Test Report: Docker_Windows 13439

75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb:2022-02-07:22575

Failed tests (11/273)

TestAddons/parallel/MetricsServer (329.92s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:358: metrics-server stabilized in 52.5985ms

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:360: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:343: "metrics-server-6b76bd68b6-gc9ll" [6b1f7007-f81a-436b-ab42-07aec87b70e8] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:360: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0458265s

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:366: (dbg) Run:  kubectl --context addons-20220207192142-8704 top pods -n kube-system

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:366: (dbg) Non-zero exit: kubectl --context addons-20220207192142-8704 top pods -n kube-system: exit status 1 (908.61ms)

** stderr ** 
	W0207 19:30:04.422327    7476 top_pod.go:274] Metrics not available for pod kube-system/coredns-64897985d-gsh99, age: 5m8.4223278s
	error: Metrics not available for pod kube-system/coredns-64897985d-gsh99, age: 5m8.4223278s

** /stderr **

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:366: (dbg) Run:  kubectl --context addons-20220207192142-8704 top pods -n kube-system
addons_test.go:366: (dbg) Non-zero exit: kubectl --context addons-20220207192142-8704 top pods -n kube-system: exit status 1 (539.854ms)

** stderr ** 
	W0207 19:30:07.506862   11244 top_pod.go:274] Metrics not available for pod kube-system/coredns-64897985d-gsh99, age: 5m11.5068626s
	error: Metrics not available for pod kube-system/coredns-64897985d-gsh99, age: 5m11.5068626s

** /stderr **

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:366: (dbg) Run:  kubectl --context addons-20220207192142-8704 top pods -n kube-system
addons_test.go:366: (dbg) Non-zero exit: kubectl --context addons-20220207192142-8704 top pods -n kube-system: exit status 1 (465.3099ms)

** stderr ** 
	W0207 19:30:10.818929    8436 top_pod.go:274] Metrics not available for pod kube-system/coredns-64897985d-gsh99, age: 5m14.8189293s
	error: Metrics not available for pod kube-system/coredns-64897985d-gsh99, age: 5m14.8189293s

** /stderr **

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:366: (dbg) Run:  kubectl --context addons-20220207192142-8704 top pods -n kube-system
addons_test.go:366: (dbg) Non-zero exit: kubectl --context addons-20220207192142-8704 top pods -n kube-system: exit status 1 (443.5836ms)

** stderr ** 
	W0207 19:30:17.527497   13176 top_pod.go:274] Metrics not available for pod kube-system/coredns-64897985d-gsh99, age: 5m21.5265002s
	error: Metrics not available for pod kube-system/coredns-64897985d-gsh99, age: 5m21.5265002s

** /stderr **

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:366: (dbg) Run:  kubectl --context addons-20220207192142-8704 top pods -n kube-system

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:366: (dbg) Non-zero exit: kubectl --context addons-20220207192142-8704 top pods -n kube-system: exit status 1 (429.0252ms)

** stderr ** 
	W0207 19:30:27.175660    2100 top_pod.go:274] Metrics not available for pod kube-system/coredns-64897985d-gsh99, age: 5m31.1756607s
	error: Metrics not available for pod kube-system/coredns-64897985d-gsh99, age: 5m31.1756607s

** /stderr **

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:366: (dbg) Run:  kubectl --context addons-20220207192142-8704 top pods -n kube-system
addons_test.go:366: (dbg) Non-zero exit: kubectl --context addons-20220207192142-8704 top pods -n kube-system: exit status 1 (457.9762ms)

** stderr ** 
	W0207 19:30:42.136961    7796 top_pod.go:274] Metrics not available for pod kube-system/coredns-64897985d-gsh99, age: 5m46.1369614s
	error: Metrics not available for pod kube-system/coredns-64897985d-gsh99, age: 5m46.1369614s

** /stderr **

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:366: (dbg) Run:  kubectl --context addons-20220207192142-8704 top pods -n kube-system

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:366: (dbg) Non-zero exit: kubectl --context addons-20220207192142-8704 top pods -n kube-system: exit status 1 (407.9272ms)

** stderr ** 
	W0207 19:31:02.914294    7312 top_pod.go:274] Metrics not available for pod kube-system/coredns-64897985d-gsh99, age: 6m6.9142942s
	error: Metrics not available for pod kube-system/coredns-64897985d-gsh99, age: 6m6.9142942s

** /stderr **

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:366: (dbg) Run:  kubectl --context addons-20220207192142-8704 top pods -n kube-system
addons_test.go:366: (dbg) Non-zero exit: kubectl --context addons-20220207192142-8704 top pods -n kube-system: exit status 1 (369.7098ms)

** stderr ** 
	W0207 19:31:44.647725    6312 top_pod.go:274] Metrics not available for pod kube-system/coredns-64897985d-gsh99, age: 6m48.6477254s
	error: Metrics not available for pod kube-system/coredns-64897985d-gsh99, age: 6m48.6477254s

** /stderr **
addons_test.go:366: (dbg) Run:  kubectl --context addons-20220207192142-8704 top pods -n kube-system
addons_test.go:366: (dbg) Non-zero exit: kubectl --context addons-20220207192142-8704 top pods -n kube-system: exit status 1 (371.7915ms)

** stderr ** 
	W0207 19:32:47.784188    8488 top_pod.go:274] Metrics not available for pod kube-system/coredns-64897985d-gsh99, age: 7m51.7841889s
	error: Metrics not available for pod kube-system/coredns-64897985d-gsh99, age: 7m51.7841889s

** /stderr **
addons_test.go:366: (dbg) Run:  kubectl --context addons-20220207192142-8704 top pods -n kube-system
addons_test.go:366: (dbg) Non-zero exit: kubectl --context addons-20220207192142-8704 top pods -n kube-system: exit status 1 (404.9453ms)

** stderr ** 
	W0207 19:33:58.885450   12712 top_pod.go:274] Metrics not available for pod kube-system/coredns-64897985d-gsh99, age: 9m2.8854507s
	error: Metrics not available for pod kube-system/coredns-64897985d-gsh99, age: 9m2.8854507s

** /stderr **
addons_test.go:366: (dbg) Run:  kubectl --context addons-20220207192142-8704 top pods -n kube-system
addons_test.go:366: (dbg) Non-zero exit: kubectl --context addons-20220207192142-8704 top pods -n kube-system: exit status 1 (340.7235ms)

** stderr ** 
	W0207 19:34:57.899653   10628 top_pod.go:274] Metrics not available for pod kube-system/coredns-64897985d-gsh99, age: 10m1.8996533s
	error: Metrics not available for pod kube-system/coredns-64897985d-gsh99, age: 10m1.8996533s

** /stderr **
addons_test.go:380: failed checking metric server: exit status 1
addons_test.go:383: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220207192142-8704 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:383: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220207192142-8704 addons disable metrics-server --alsologtostderr -v=1: (5.7693336s)
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect addons-20220207192142-8704
helpers_test.go:232: (dbg) Done: docker inspect addons-20220207192142-8704: (1.3074784s)
helpers_test.go:236: (dbg) docker inspect addons-20220207192142-8704:

-- stdout --
	[
	    {
	        "Id": "81df432bf00367a3498b0f2faea53e8c4b2ef720d122a46220637d8627d28bd1",
	        "Created": "2022-02-07T19:23:45.1244269Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2134,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-02-07T19:23:48.1174317Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:50384aa4ebef3abc81b3b83296147bd747dcd04d4644d8f3150476ffa93e6889",
	        "ResolvConfPath": "/var/lib/docker/containers/81df432bf00367a3498b0f2faea53e8c4b2ef720d122a46220637d8627d28bd1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/81df432bf00367a3498b0f2faea53e8c4b2ef720d122a46220637d8627d28bd1/hostname",
	        "HostsPath": "/var/lib/docker/containers/81df432bf00367a3498b0f2faea53e8c4b2ef720d122a46220637d8627d28bd1/hosts",
	        "LogPath": "/var/lib/docker/containers/81df432bf00367a3498b0f2faea53e8c4b2ef720d122a46220637d8627d28bd1/81df432bf00367a3498b0f2faea53e8c4b2ef720d122a46220637d8627d28bd1-json.log",
	        "Name": "/addons-20220207192142-8704",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-20220207192142-8704:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-20220207192142-8704",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d1facd0c85904f8ba413794193210b76948b0ff154ab60efaef716bd20ebe8f2-init/diff:/var/lib/docker/overlay2/75e1ee3c034aacc956b8c3ecc7ab61ac5c38e660082589ceb37efd240a771cc5/diff:/var/lib/docker/overlay2/189fe0ac50cbe021b1f58d4d3552848c814165ab41c880cc414a3d772ecf8a17/diff:/var/lib/docker/overlay2/1825c50829a945a491708c366e3adc3e6d891ec2fcbd7f13b41f06c64baa55d9/diff:/var/lib/docker/overlay2/0b9358d8d7de1369e9019714824c8f1007b6c08b3ebf296b7b1288610816a2ce/diff:/var/lib/docker/overlay2/689f6514ad269d91cd1861629d1949b077031e825417ef4dfb5621888699407b/diff:/var/lib/docker/overlay2/8dff862a1c6a46807e22567df5955e49a8aa3d0a1f2ad45ca46f2ab5374556fe/diff:/var/lib/docker/overlay2/ee466d69c85d056ef8068fd5652d1a05e5ca08f4f2880d8156cd2f212ceaaaa6/diff:/var/lib/docker/overlay2/86890d1d8e6826b123ee9ec4c463f6f91ad837f07b7147e0c6ef8c7e17b601da/diff:/var/lib/docker/overlay2/b657d041c7bdb28ab2fd58a8e3615ec574e7e5fcace80e88f630332a1ff67ff7/diff:/var/lib/docker/overlay2/4339b0
c7baf085cb3dc647fb19cd967f89fdd4e316e2bc806815c81fc17efc59/diff:/var/lib/docker/overlay2/36993c24ec6e3eb908331a1c00b702e3326415b7124d4d1788747ba328eb6e2a/diff:/var/lib/docker/overlay2/5b68d569c7973aeabb60b4d744a1b86cc3ebb8b284e55bbbe33576e97e3ac021/diff:/var/lib/docker/overlay2/57b6ab85187eac783753b7bdcafb75e9d26d3e9d22b614bbfa42fbf4a6e879f8/diff:/var/lib/docker/overlay2/e5f2f9b80a695305ffbe047f65db35cc276ac41f987ec84a5742b3769918cb79/diff:/var/lib/docker/overlay2/06d7d08e9ebfbe3202537757cc03ccaa87b749e7dd8354ae1978c44a1b14a690/diff:/var/lib/docker/overlay2/44604b9a5d1c918e1d3ebe374cc5b01af83b10aef4cbf54e72d7fd0b7be60646/diff:/var/lib/docker/overlay2/9d28038d0516655f0a12f3ec5220089de0a54540a27220e4f412dd3acc577f9b/diff:/var/lib/docker/overlay2/ec704366d20c2f84ce0d53c1b278507dc9cc66331cba15d90521a96d118d45af/diff:/var/lib/docker/overlay2/32b5b8eb800bf64445a63842604512878f22712d00a869b2104a1b528d6e8010/diff:/var/lib/docker/overlay2/6ff5152a44a5b0fd36c63aa1c7199ff420477113981a2dd750c29f82e1509669/diff:/var/lib/d
ocker/overlay2/b42f3edd75dd995daac9924998fafd7fe1b919f222b8185a3dfeef9a762660c7/diff:/var/lib/docker/overlay2/3cd19c2de3ea2cc271124c2c82db46bf5f550625dd02a5cde5c517af93c73caa/diff:/var/lib/docker/overlay2/b41830a6d20150650c5fb37bb60e7c06147734911fda7300a739cd023bb4789a/diff:/var/lib/docker/overlay2/925bf7a180aeb21aee1f13bf31ccc1f05a642fd383aabb499148885dcac5cfeb/diff:/var/lib/docker/overlay2/a5ec93ff5dc3e9d4a9975d8f1176019d102f9e8c319a4d5016f842be26bb5671/diff:/var/lib/docker/overlay2/37e01c18dc12ba0b9bd89093b244ef29456df1fb30fc4a8c3e5596b7b56ada0a/diff:/var/lib/docker/overlay2/6ce0b6587d0750a0ef5383637b91df31d4c1619e3a494b84c8714c5beebf1dbc/diff:/var/lib/docker/overlay2/8f4e875a02344a4926d7f5ad052151ca0eef0364a189b7ca60ebb338213d7c8e/diff:/var/lib/docker/overlay2/2790936ada4be199505c2cab1447b90a25076c4d2cbceadeb4a52026c71b9c60/diff:/var/lib/docker/overlay2/231fcc4021464c7f510cca7eecaabc94216fcc70cb62f97465c0d546064b25b8/diff:/var/lib/docker/overlay2/30845ecf75e8fd0fa04703004fc686bb8aff8eabe9437f4e7a1096a5bca
060a3/diff:/var/lib/docker/overlay2/3ae1acee47e31df704424e5e9dbaed72199c1cb3a318825a84cc9d2f08f1d807/diff:/var/lib/docker/overlay2/f9fe697b5ffab06c3cc31c3e2b7d924c32d4f0f4ee8fd29cb5e2b46e586b4d4d/diff:/var/lib/docker/overlay2/68afa844b9fe835f1997b14fe394dac6238ee6a39aa0abfc34a93c062d58f819/diff:/var/lib/docker/overlay2/94b84dda68e5a3dbf4319437e5d026f2c5c705496ca2d9922f7e865879146b56/diff:/var/lib/docker/overlay2/f133dd3fe2bf48f8bd9dced36254f4cc973685d2ddde9ee6e0f2467ea7d34592/diff:/var/lib/docker/overlay2/dafd5505dd817285a71ea03b36fb5684a0c844441c07c909d1e6c47b874b33d4/diff:/var/lib/docker/overlay2/c714cab2096f6325d72b4b73673c329c5db40f169c0d6d5d034bf8af87b90983/diff:/var/lib/docker/overlay2/ea71191eaaa01123105da39dc897cb6e11c028c8a2e91dc62ff85bb5e0fb1884/diff:/var/lib/docker/overlay2/6c554fb0a2463d3ef05cdb7858f9788626b5c72dbb4ea5a0431ec665de90dc74/diff:/var/lib/docker/overlay2/01e92d0b67f2be5d7d6ba3f84ffac8ad1e0c516b03b45346070503f62de32e5a/diff:/var/lib/docker/overlay2/f5f6f40c4df999e1ae2e5733fa6aad1cf8963e
bd6e2b9f849164ca5c149a4262/diff:/var/lib/docker/overlay2/e1eb2f89916ebfdb9a8d5aacfd9618edc370a018de0114d193b6069979c02aa7/diff:/var/lib/docker/overlay2/0e35d26329f1b7cf4e1b2bb03588192d3ea37764eab1ccc5a598db2164c932d2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d1facd0c85904f8ba413794193210b76948b0ff154ab60efaef716bd20ebe8f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d1facd0c85904f8ba413794193210b76948b0ff154ab60efaef716bd20ebe8f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d1facd0c85904f8ba413794193210b76948b0ff154ab60efaef716bd20ebe8f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-20220207192142-8704",
	                "Source": "/var/lib/docker/volumes/addons-20220207192142-8704/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-20220207192142-8704",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-20220207192142-8704",
	                "name.minikube.sigs.k8s.io": "addons-20220207192142-8704",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "14a94bdf4806505e0855ea78f4a888402f20990914c05d312bda00d850f7ccc7",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61971"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61972"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61973"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61974"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61970"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/14a94bdf4806",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-20220207192142-8704": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "81df432bf003",
	                        "addons-20220207192142-8704"
	                    ],
	                    "NetworkID": "17b175d8f74f4f90eb5a739736e7be51865b0b2766cac0b2dd0ceb5dfb3fb27c",
	                    "EndpointID": "7a4665fdfa7ee8fb6183633dd36d39d26591e37d0f67c0284f0909158a47e6ce",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-20220207192142-8704 -n addons-20220207192142-8704
helpers_test.go:240: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-20220207192142-8704 -n addons-20220207192142-8704: (7.293741s)
helpers_test.go:245: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220207192142-8704 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220207192142-8704 logs -n 25: (7.4781369s)
helpers_test.go:253: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-------------------------------------|-------------------------------------|-------------------|---------|-------------------------------|-------------------------------|
	| Command |                Args                 |               Profile               |       User        | Version |          Start Time           |           End Time            |
	|---------|-------------------------------------|-------------------------------------|-------------------|---------|-------------------------------|-------------------------------|
	| delete  | --all                               | download-only-20220207191910-8704   | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 19:20:01 GMT | Mon, 07 Feb 2022 19:20:13 GMT |
	| delete  | -p                                  | download-only-20220207191910-8704   | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 19:20:14 GMT | Mon, 07 Feb 2022 19:20:21 GMT |
	|         | download-only-20220207191910-8704   |                                     |                   |         |                               |                               |
	| delete  | -p                                  | download-only-20220207191910-8704   | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 19:20:21 GMT | Mon, 07 Feb 2022 19:20:28 GMT |
	|         | download-only-20220207191910-8704   |                                     |                   |         |                               |                               |
	| delete  | -p                                  | download-docker-20220207192028-8704 | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 19:21:17 GMT | Mon, 07 Feb 2022 19:21:26 GMT |
	|         | download-docker-20220207192028-8704 |                                     |                   |         |                               |                               |
	| delete  | -p                                  | binary-mirror-20220207192126-8704   | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 19:21:34 GMT | Mon, 07 Feb 2022 19:21:42 GMT |
	|         | binary-mirror-20220207192126-8704   |                                     |                   |         |                               |                               |
	| start   | -p addons-20220207192142-8704       | addons-20220207192142-8704          | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 19:21:43 GMT | Mon, 07 Feb 2022 19:29:58 GMT |
	|         | --wait=true --memory=4000           |                                     |                   |         |                               |                               |
	|         | --alsologtostderr                   |                                     |                   |         |                               |                               |
	|         | --addons=registry                   |                                     |                   |         |                               |                               |
	|         | --addons=metrics-server             |                                     |                   |         |                               |                               |
	|         | --addons=olm                        |                                     |                   |         |                               |                               |
	|         | --addons=volumesnapshots            |                                     |                   |         |                               |                               |
	|         | --addons=csi-hostpath-driver        |                                     |                   |         |                               |                               |
	|         | --addons=gcp-auth                   |                                     |                   |         |                               |                               |
	|         | --driver=docker                     |                                     |                   |         |                               |                               |
	|         | --addons=ingress                    |                                     |                   |         |                               |                               |
	|         | --addons=ingress-dns                |                                     |                   |         |                               |                               |
	|         | --addons=helm-tiller                |                                     |                   |         |                               |                               |
	| -p      | addons-20220207192142-8704          | addons-20220207192142-8704          | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 19:30:23 GMT | Mon, 07 Feb 2022 19:30:28 GMT |
	|         | addons disable helm-tiller          |                                     |                   |         |                               |                               |
	|         | --alsologtostderr -v=1              |                                     |                   |         |                               |                               |
	| -p      | addons-20220207192142-8704 ssh      | addons-20220207192142-8704          | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 19:30:21 GMT | Mon, 07 Feb 2022 19:30:28 GMT |
	|         | curl -s http://127.0.0.1/ -H        |                                     |                   |         |                               |                               |
	|         | 'Host: nginx.example.com'           |                                     |                   |         |                               |                               |
	| -p      | addons-20220207192142-8704          | addons-20220207192142-8704          | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 19:31:14 GMT | Mon, 07 Feb 2022 19:31:28 GMT |
	|         | addons disable                      |                                     |                   |         |                               |                               |
	|         | csi-hostpath-driver                 |                                     |                   |         |                               |                               |
	|         | --alsologtostderr -v=1              |                                     |                   |         |                               |                               |
	| -p      | addons-20220207192142-8704          | addons-20220207192142-8704          | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 19:31:28 GMT | Mon, 07 Feb 2022 19:31:34 GMT |
	|         | addons disable volumesnapshots      |                                     |                   |         |                               |                               |
	|         | --alsologtostderr -v=1              |                                     |                   |         |                               |                               |
	| -p      | addons-20220207192142-8704          | addons-20220207192142-8704          | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 19:34:58 GMT | Mon, 07 Feb 2022 19:35:03 GMT |
	|         | addons disable metrics-server       |                                     |                   |         |                               |                               |
	|         | --alsologtostderr -v=1              |                                     |                   |         |                               |                               |
	|---------|-------------------------------------|-------------------------------------|-------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/02/07 19:21:43
	Running on machine: minikube3
	Binary: Built with gc go1.17.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0207 19:21:43.156105    9696 out.go:297] Setting OutFile to fd 1012 ...
	I0207 19:21:43.212490    9696 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:21:43.212490    9696 out.go:310] Setting ErrFile to fd 688...
	I0207 19:21:43.212490    9696 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:21:43.226088    9696 out.go:304] Setting JSON to false
	I0207 19:21:43.228496    9696 start.go:112] hostinfo: {"hostname":"minikube3","uptime":429322,"bootTime":1643832381,"procs":151,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0207 19:21:43.229462    9696 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0207 19:21:43.235047    9696 out.go:176] * [addons-20220207192142-8704] minikube v1.25.1 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I0207 19:21:43.235371    9696 notify.go:174] Checking for updates...
	I0207 19:21:43.238927    9696 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0207 19:21:43.241463    9696 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0207 19:21:43.244437    9696 out.go:176]   - MINIKUBE_LOCATION=13439
	I0207 19:21:43.247083    9696 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0207 19:21:43.247439    9696 driver.go:344] Setting default libvirt URI to qemu:///system
	I0207 19:21:45.682821    9696 docker.go:132] docker version: linux-20.10.12
	I0207 19:21:45.688399    9696 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 19:21:47.683503    9696 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (1.9950661s)
	I0207 19:21:47.684407    9696 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-02-07 19:21:46.7747321 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 19:21:47.691667    9696 out.go:176] * Using the docker driver based on user configuration
	I0207 19:21:47.691667    9696 start.go:281] selected driver: docker
	I0207 19:21:47.691667    9696 start.go:798] validating driver "docker" against <nil>
	I0207 19:21:47.692258    9696 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0207 19:21:47.757794    9696 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 19:21:49.765567    9696 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.0077632s)
	I0207 19:21:49.765854    9696 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-02-07 19:21:48.8479703 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 19:21:49.766123    9696 start_flags.go:288] no existing cluster config was found, will generate one from the flags 
	I0207 19:21:49.766752    9696 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	I0207 19:21:49.766844    9696 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0207 19:21:49.766963    9696 cni.go:93] Creating CNI manager for ""
	I0207 19:21:49.767041    9696 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0207 19:21:49.767041    9696 start_flags.go:302] config:
	{Name:addons-20220207192142-8704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:addons-20220207192142-8704 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 19:21:49.772515    9696 out.go:176] * Starting control plane node addons-20220207192142-8704 in cluster addons-20220207192142-8704
	I0207 19:21:49.772515    9696 cache.go:120] Beginning downloading kic base image for docker with docker
	I0207 19:21:49.774637    9696 out.go:176] * Pulling base image ...
	I0207 19:21:49.774637    9696 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 19:21:49.775267    9696 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon
	I0207 19:21:49.775472    9696 preload.go:148] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4
	I0207 19:21:49.775514    9696 cache.go:57] Caching tarball of preloaded images
	I0207 19:21:49.775514    9696 preload.go:174] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0207 19:21:49.776103    9696 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.3 on docker
	I0207 19:21:49.776391    9696 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\config.json ...
	I0207 19:21:49.776391    9696 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\config.json: {Name:mk747fd43e61fe18c4076997c0bbfc7eb2da70c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:21:50.949056    9696 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 to local cache
	I0207 19:21:50.949188    9696 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\kicbase-builds_v0.0.29-1643823806-13302@sha256_9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8.tar
	I0207 19:21:50.949637    9696 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\kicbase-builds_v0.0.29-1643823806-13302@sha256_9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8.tar
	I0207 19:21:50.949673    9696 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local cache directory
	I0207 19:21:50.949831    9696 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local cache directory, skipping pull
	I0207 19:21:50.949831    9696 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 exists in cache, skipping pull
	I0207 19:21:50.949997    9696 cache.go:151] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 as a tarball
	I0207 19:21:50.949997    9696 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 from local cache
	I0207 19:21:50.949997    9696 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\kicbase-builds_v0.0.29-1643823806-13302@sha256_9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8.tar
	I0207 19:22:40.242259    9696 cache.go:165] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 from cached tarball
	I0207 19:22:40.242259    9696 cache.go:208] Successfully downloaded all kic artifacts
	I0207 19:22:40.242259    9696 start.go:313] acquiring machines lock for addons-20220207192142-8704: {Name:mk05b1fb44b84fbac54d29e6f7e2f755a3492408 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0207 19:22:40.243023    9696 start.go:317] acquired machines lock for "addons-20220207192142-8704" in 232.2µs
	I0207 19:22:40.243253    9696 start.go:89] Provisioning new machine with config: &{Name:addons-20220207192142-8704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:addons-20220207192142-8704 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0207 19:22:40.243423    9696 start.go:126] createHost starting for "" (driver="docker")
	I0207 19:22:40.247612    9696 out.go:203] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0207 19:22:40.248391    9696 start.go:160] libmachine.API.Create for "addons-20220207192142-8704" (driver="docker")
	I0207 19:22:40.248499    9696 client.go:168] LocalClient.Create starting
	I0207 19:22:40.249473    9696 main.go:130] libmachine: Creating CA: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0207 19:22:40.311762    9696 main.go:130] libmachine: Creating client certificate: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0207 19:22:40.669509    9696 cli_runner.go:133] Run: docker network inspect addons-20220207192142-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0207 19:22:41.839016    9696 cli_runner.go:180] docker network inspect addons-20220207192142-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0207 19:22:41.839016    9696 cli_runner.go:186] Completed: docker network inspect addons-20220207192142-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.169393s)
	I0207 19:22:41.844837    9696 network_create.go:254] running [docker network inspect addons-20220207192142-8704] to gather additional debugging logs...
	I0207 19:22:41.845410    9696 cli_runner.go:133] Run: docker network inspect addons-20220207192142-8704
	W0207 19:22:43.024973    9696 cli_runner.go:180] docker network inspect addons-20220207192142-8704 returned with exit code 1
	I0207 19:22:43.527282    9696 cli_runner.go:186] Completed: docker network inspect addons-20220207192142-8704: (1.1795576s)
	I0207 19:22:43.527282    9696 network_create.go:257] error running [docker network inspect addons-20220207192142-8704]: docker network inspect addons-20220207192142-8704: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20220207192142-8704
	I0207 19:22:43.527282    9696 network_create.go:259] output of [docker network inspect addons-20220207192142-8704]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20220207192142-8704
	
	** /stderr **
	I0207 19:22:43.535096    9696 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0207 19:22:44.664711    9696 cli_runner.go:186] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1296089s)
	I0207 19:22:44.687081    9696 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00014e2d0] misses:0}
	I0207 19:22:44.687081    9696 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 19:22:44.687081    9696 network_create.go:106] attempt to create docker network addons-20220207192142-8704 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0207 19:22:44.693668    9696 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220207192142-8704
	I0207 19:22:47.053247    9696 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220207192142-8704: (2.3595669s)
	I0207 19:22:47.053682    9696 network_create.go:90] docker network addons-20220207192142-8704 192.168.49.0/24 created
	I0207 19:22:47.053682    9696 kic.go:106] calculated static IP "192.168.49.2" for the "addons-20220207192142-8704" container
	I0207 19:22:47.063588    9696 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0207 19:22:48.271898    9696 cli_runner.go:186] Completed: docker ps -a --format {{.Names}}: (1.2080759s)
	I0207 19:22:50.044826    9696 cli_runner.go:133] Run: docker volume create addons-20220207192142-8704 --label name.minikube.sigs.k8s.io=addons-20220207192142-8704 --label created_by.minikube.sigs.k8s.io=true
	I0207 19:22:51.331878    9696 cli_runner.go:186] Completed: docker volume create addons-20220207192142-8704 --label name.minikube.sigs.k8s.io=addons-20220207192142-8704 --label created_by.minikube.sigs.k8s.io=true: (1.2864111s)
	I0207 19:22:51.331914    9696 oci.go:102] Successfully created a docker volume addons-20220207192142-8704
	I0207 19:22:51.338291    9696 cli_runner.go:133] Run: docker run --rm --name addons-20220207192142-8704-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20220207192142-8704 --entrypoint /usr/bin/test -v addons-20220207192142-8704:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -d /var/lib
	I0207 19:23:01.559670    9696 cli_runner.go:186] Completed: docker run --rm --name addons-20220207192142-8704-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20220207192142-8704 --entrypoint /usr/bin/test -v addons-20220207192142-8704:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -d /var/lib: (10.2213263s)
	I0207 19:23:01.559670    9696 oci.go:106] Successfully prepared a docker volume addons-20220207192142-8704
	I0207 19:23:01.559670    9696 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 19:23:01.559670    9696 kic.go:179] Starting extracting preloaded images to volume ...
	I0207 19:23:01.565137    9696 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20220207192142-8704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0207 19:23:39.812989    9696 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20220207192142-8704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir: (38.2474788s)
	I0207 19:23:39.813066    9696 kic.go:188] duration metric: took 38.253168 seconds to extract preloaded images to volume
	I0207 19:23:39.824485    9696 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 19:23:41.878997    9696 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.0545016s)
	I0207 19:23:41.879594    9696 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:46 SystemTime:2022-02-07 19:23:40.9518179 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 19:23:41.891045    9696 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0207 19:23:43.879862    9696 cli_runner.go:186] Completed: docker info --format "'{{json .SecurityOptions}}'": (1.9888071s)
	I0207 19:23:43.885916    9696 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-20220207192142-8704 --name addons-20220207192142-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20220207192142-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-20220207192142-8704 --network addons-20220207192142-8704 --ip 192.168.49.2 --volume addons-20220207192142-8704:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8
	I0207 19:23:48.235167    9696 cli_runner.go:186] Completed: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-20220207192142-8704 --name addons-20220207192142-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20220207192142-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-20220207192142-8704 --network addons-20220207192142-8704 --ip 192.168.49.2 --volume addons-20220207192142-8704:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: (4.3491914s)
	I0207 19:23:48.240470    9696 cli_runner.go:133] Run: docker container inspect addons-20220207192142-8704 --format={{.State.Running}}
	I0207 19:23:49.603216    9696 cli_runner.go:186] Completed: docker container inspect addons-20220207192142-8704 --format={{.State.Running}}: (1.3626962s)
	I0207 19:23:49.609038    9696 cli_runner.go:133] Run: docker container inspect addons-20220207192142-8704 --format={{.State.Status}}
	I0207 19:23:50.810178    9696 cli_runner.go:186] Completed: docker container inspect addons-20220207192142-8704 --format={{.State.Status}}: (1.201134s)
	I0207 19:23:50.816853    9696 cli_runner.go:133] Run: docker exec addons-20220207192142-8704 stat /var/lib/dpkg/alternatives/iptables
	I0207 19:23:52.389967    9696 cli_runner.go:186] Completed: docker exec addons-20220207192142-8704 stat /var/lib/dpkg/alternatives/iptables: (1.573106s)
	I0207 19:23:52.390275    9696 oci.go:281] the created container "addons-20220207192142-8704" has a running status.
	I0207 19:23:52.390500    9696 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-20220207192142-8704\id_rsa...
	I0207 19:23:52.622334    9696 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-20220207192142-8704\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0207 19:23:53.948882    9696 cli_runner.go:133] Run: docker container inspect addons-20220207192142-8704 --format={{.State.Status}}
	I0207 19:23:55.140790    9696 cli_runner.go:186] Completed: docker container inspect addons-20220207192142-8704 --format={{.State.Status}}: (1.1919014s)
	I0207 19:23:55.153796    9696 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0207 19:23:55.153796    9696 kic_runner.go:114] Args: [docker exec --privileged addons-20220207192142-8704 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0207 19:23:56.703137    9696 kic_runner.go:123] Done: [docker exec --privileged addons-20220207192142-8704 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.549285s)
	I0207 19:23:56.706129    9696 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-20220207192142-8704\id_rsa...
	I0207 19:23:57.184048    9696 cli_runner.go:133] Run: docker container inspect addons-20220207192142-8704 --format={{.State.Status}}
	I0207 19:23:58.347636    9696 cli_runner.go:186] Completed: docker container inspect addons-20220207192142-8704 --format={{.State.Status}}: (1.1631442s)
	I0207 19:23:58.347636    9696 machine.go:88] provisioning docker machine ...
	I0207 19:23:58.347636    9696 ubuntu.go:169] provisioning hostname "addons-20220207192142-8704"
	I0207 19:23:58.354223    9696 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704
	I0207 19:23:59.519597    9696 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704: (1.165368s)
	I0207 19:23:59.523640    9696 main.go:130] libmachine: Using SSH client type: native
	I0207 19:23:59.529954    9696 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x10cb900] 0x10ce7c0 <nil>  [] 0s} 127.0.0.1 61971 <nil> <nil>}
	I0207 19:23:59.529954    9696 main.go:130] libmachine: About to run SSH command:
	sudo hostname addons-20220207192142-8704 && echo "addons-20220207192142-8704" | sudo tee /etc/hostname
	I0207 19:23:59.788399    9696 main.go:130] libmachine: SSH cmd err, output: <nil>: addons-20220207192142-8704
	
	I0207 19:23:59.795315    9696 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704
	I0207 19:24:00.968530    9696 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704: (1.1731658s)
	I0207 19:24:00.972258    9696 main.go:130] libmachine: Using SSH client type: native
	I0207 19:24:00.972664    9696 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x10cb900] 0x10ce7c0 <nil>  [] 0s} 127.0.0.1 61971 <nil> <nil>}
	I0207 19:24:00.972757    9696 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-20220207192142-8704' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-20220207192142-8704/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-20220207192142-8704' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0207 19:24:01.136790    9696 main.go:130] libmachine: SSH cmd err, output: <nil>: 
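The SSH command above is minikube's idempotent /etc/hosts fixup: touch the file only if the hostname is missing, and prefer rewriting an existing 127.0.1.1 entry over appending a new one. A minimal sketch of the same logic, replayed against a scratch file instead of the real /etc/hosts (the file contents here are stand-ins for illustration):

```shell
# Replay of minikube's /etc/hosts hostname fixup against a temp file.
HOSTS=$(mktemp)
NAME=addons-20220207192142-8704
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

# Only act when no line already carries the hostname; if a 127.0.1.1 entry
# exists, rewrite it in place, otherwise append a fresh one.
if ! grep -q "\s$NAME\$" "$HOSTS"; then
  if grep -q '^127.0.1.1\s' "$HOSTS"; then
    sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NAME/" "$HOSTS"
  else
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
cat "$HOSTS"
```

Because every branch is guarded by a grep, re-running the command over SSH leaves a correct file unchanged, which is why the log shows it run unconditionally on each provision.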
	I0207 19:24:01.136790    9696 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0207 19:24:01.136790    9696 ubuntu.go:177] setting up certificates
	I0207 19:24:01.136790    9696 provision.go:83] configureAuth start
	I0207 19:24:01.143023    9696 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20220207192142-8704
	I0207 19:24:02.271884    9696 cli_runner.go:186] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20220207192142-8704: (1.1286135s)
	I0207 19:24:02.271976    9696 provision.go:138] copyHostCerts
	I0207 19:24:02.272061    9696 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0207 19:24:02.273297    9696 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0207 19:24:02.274675    9696 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1675 bytes)
	I0207 19:24:02.275907    9696 provision.go:112] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-20220207192142-8704 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-20220207192142-8704]
	I0207 19:24:02.523900    9696 provision.go:172] copyRemoteCerts
	I0207 19:24:02.534339    9696 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0207 19:24:02.534339    9696 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704
	I0207 19:24:03.724675    9696 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704: (1.1901155s)
	I0207 19:24:03.724985    9696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61971 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-20220207192142-8704\id_rsa Username:docker}
	I0207 19:24:03.876171    9696 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.3418249s)
	I0207 19:24:03.876765    9696 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0207 19:24:03.935521    9696 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0207 19:24:03.997416    9696 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1253 bytes)
	I0207 19:24:04.060144    9696 provision.go:86] duration metric: configureAuth took 2.9232721s
	I0207 19:24:04.060171    9696 ubuntu.go:193] setting minikube options for container-runtime
	I0207 19:24:04.060171    9696 config.go:176] Loaded profile config "addons-20220207192142-8704": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 19:24:04.068813    9696 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704
	I0207 19:24:05.238076    9696 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704: (1.1689605s)
	I0207 19:24:05.243432    9696 main.go:130] libmachine: Using SSH client type: native
	I0207 19:24:05.243432    9696 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x10cb900] 0x10ce7c0 <nil>  [] 0s} 127.0.0.1 61971 <nil> <nil>}
	I0207 19:24:05.243432    9696 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0207 19:24:05.468858    9696 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0207 19:24:05.468858    9696 ubuntu.go:71] root file system type: overlay
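The `df --output=fstype / | tail -n 1` probe above is how the provisioner discovers the node's root filesystem; inside the kicbase container it prints `overlay`, which drives the Docker storage-driver configuration. The same probe, run on whatever machine executes it:

```shell
# Detect the root filesystem type, as minikube does over SSH. Inside the
# kicbase container this is "overlay"; on a bare host it may be ext4, btrfs, etc.
FSTYPE=$(df --output=fstype / | tail -n 1)
echo "root filesystem: $FSTYPE"
```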
	I0207 19:24:05.469479    9696 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0207 19:24:05.477214    9696 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704
	I0207 19:24:06.664069    9696 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704: (1.1866832s)
	I0207 19:24:06.669693    9696 main.go:130] libmachine: Using SSH client type: native
	I0207 19:24:06.669693    9696 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x10cb900] 0x10ce7c0 <nil>  [] 0s} 127.0.0.1 61971 <nil> <nil>}
	I0207 19:24:06.670272    9696 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0207 19:24:06.900699    9696 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0207 19:24:06.912660    9696 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704
	I0207 19:24:08.080847    9696 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704: (1.1679756s)
	I0207 19:24:08.084464    9696 main.go:130] libmachine: Using SSH client type: native
	I0207 19:24:08.085008    9696 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x10cb900] 0x10ce7c0 <nil>  [] 0s} 127.0.0.1 61971 <nil> <nil>}
	I0207 19:24:08.085008    9696 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0207 19:24:11.229428    9696 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-12-13 11:43:42.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-02-07 19:24:06.887470100 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0207 19:24:11.229506    9696 machine.go:91] provisioned docker machine in 12.8818034s
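The unit-file swap above uses a write-then-diff pattern: the new unit is written to `docker.service.new`, and the `mv` plus daemon-reload/restart run only when `diff` exits non-zero, so an unchanged unit never triggers a Docker restart. A scratch-file sketch of the same pattern (restart step replaced by a comment, file contents illustrative):

```shell
# Update-if-changed: swap in the new file only when diff reports a difference.
OLD=$(mktemp) NEW=$(mktemp)
echo "ExecStart=/usr/bin/dockerd -H fd://" > "$OLD"
echo "ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376" > "$NEW"

# diff exits non-zero when files differ, so the command after || runs only on
# a real change; the real flow also does systemctl daemon-reload && restart here.
diff -u "$OLD" "$NEW" || mv "$NEW" "$OLD"
cat "$OLD"
```

This also explains the diff text in the log output: it is the side effect of the comparison, printed before the swap happens.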
	I0207 19:24:11.229586    9696 client.go:171] LocalClient.Create took 1m30.9805554s
	I0207 19:24:11.229793    9696 start.go:168] duration metric: libmachine.API.Create for "addons-20220207192142-8704" took 1m30.9808867s
	I0207 19:24:11.229793    9696 start.go:267] post-start starting for "addons-20220207192142-8704" (driver="docker")
	I0207 19:24:11.229793    9696 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0207 19:24:11.241764    9696 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0207 19:24:11.250550    9696 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704
	I0207 19:24:12.448425    9696 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704: (1.1978695s)
	I0207 19:24:12.448656    9696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61971 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-20220207192142-8704\id_rsa Username:docker}
	I0207 19:24:12.612282    9696 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.3697863s)
	I0207 19:24:12.620742    9696 ssh_runner.go:195] Run: cat /etc/os-release
	I0207 19:24:12.634174    9696 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0207 19:24:12.634174    9696 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0207 19:24:12.634174    9696 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0207 19:24:12.634174    9696 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0207 19:24:12.634174    9696 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0207 19:24:12.634174    9696 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0207 19:24:12.634977    9696 start.go:270] post-start completed in 1.4051771s
	I0207 19:24:12.644044    9696 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20220207192142-8704
	I0207 19:24:13.766589    9696 cli_runner.go:186] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20220207192142-8704: (1.1224664s)
	I0207 19:24:13.766740    9696 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\config.json ...
	I0207 19:24:13.779788    9696 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0207 19:24:13.783388    9696 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704
	I0207 19:24:14.951625    9696 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704: (1.1681379s)
	I0207 19:24:14.951704    9696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61971 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-20220207192142-8704\id_rsa Username:docker}
	I0207 19:24:15.091419    9696 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.3116239s)
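The `df -h /var | awk 'NR==2{print $5}'` call above extracts the disk-use percentage: row 2 of `df` output is the filesystem line, and column 5 is `Use%`. The same extraction against `/` (on a test machine `/var` is often the same mount):

```shell
# Pull the use percentage for a mount, as minikube does after provisioning.
USED=$(df -h / | awk 'NR==2{print $5}')
echo "disk used: $USED"
```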
	I0207 19:24:15.091419    9696 start.go:129] duration metric: createHost completed in 1m34.8473856s
	I0207 19:24:15.091419    9696 start.go:80] releasing machines lock for "addons-20220207192142-8704", held for 1m34.8478664s
	I0207 19:24:15.099147    9696 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20220207192142-8704
	I0207 19:24:16.258540    9696 cli_runner.go:186] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20220207192142-8704: (1.1593867s)
	I0207 19:24:16.260591    9696 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0207 19:24:16.266884    9696 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704
	I0207 19:24:16.268191    9696 ssh_runner.go:195] Run: systemctl --version
	I0207 19:24:16.274622    9696 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704
	I0207 19:24:17.468275    9696 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704: (1.2013322s)
	I0207 19:24:17.468543    9696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61971 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-20220207192142-8704\id_rsa Username:docker}
	I0207 19:24:17.484147    9696 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704: (1.2095182s)
	I0207 19:24:17.484426    9696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61971 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-20220207192142-8704\id_rsa Username:docker}
	I0207 19:24:17.721323    9696 ssh_runner.go:235] Completed: systemctl --version: (1.4531239s)
	I0207 19:24:17.721323    9696 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.4607236s)
	I0207 19:24:17.730217    9696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0207 19:24:17.774808    9696 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0207 19:24:17.809461    9696 cruntime.go:272] skipping containerd shutdown because we are bound to it
	I0207 19:24:17.818026    9696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0207 19:24:17.850641    9696 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
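The command above writes `/etc/crictl.yaml` so that crictl talks to the dockershim socket; the `%!s(MISSING)` in the log is Go's fmt marker for a `%s` verb that was logged without its operand, not part of the command that actually ran. The intended file, replayed into a scratch path:

```shell
# Replay of the crictl config minikube writes; CFG stands in for /etc/crictl.yaml.
CFG=$(mktemp)
printf '%s' "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" > "$CFG"
cat "$CFG"
```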
	I0207 19:24:17.903903    9696 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0207 19:24:18.065625    9696 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0207 19:24:18.218232    9696 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0207 19:24:18.266781    9696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0207 19:24:18.415047    9696 ssh_runner.go:195] Run: sudo systemctl start docker
	I0207 19:24:18.456176    9696 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0207 19:24:18.594789    9696 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0207 19:24:18.699704    9696 out.go:203] * Preparing Kubernetes v1.23.3 on Docker 20.10.12 ...
	I0207 19:24:18.706089    9696 cli_runner.go:133] Run: docker exec -t addons-20220207192142-8704 dig +short host.docker.internal
	I0207 19:24:20.319684    9696 cli_runner.go:186] Completed: docker exec -t addons-20220207192142-8704 dig +short host.docker.internal: (1.6135863s)
	I0207 19:24:20.319684    9696 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0207 19:24:20.329059    9696 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0207 19:24:20.346567    9696 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
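The `host.minikube.internal` update above uses a different idempotency trick than the hostname fixup: rebuild the file without any stale entry for the name, append the fresh mapping, then `cp` the temp file back. A sketch against a scratch file, seeded with a deliberately stale entry (the `10.0.0.1` line is illustrative):

```shell
# Rebuild-and-replace update for host.minikube.internal, on a temp file.
HOSTS=$(mktemp) TMP=$(mktemp)
printf '127.0.0.1 localhost\n10.0.0.1\thost.minikube.internal\n' > "$HOSTS"

# Drop any existing entry for the name, append the current one, copy back.
{ grep -v $'\thost.minikube.internal$' "$HOSTS"; \
  echo $'192.168.65.2\thost.minikube.internal'; } > "$TMP"
cp "$TMP" "$HOSTS"
cat "$HOSTS"
```

The preceding `grep 192.168.65.2\thost.minikube.internal$ /etc/hosts` in the log is a fast-path check; the rewrite runs only when that grep fails.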
	I0207 19:24:20.381644    9696 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" addons-20220207192142-8704
	I0207 19:24:21.568002    9696 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" addons-20220207192142-8704: (1.186235s)
	I0207 19:24:21.574244    9696 out.go:176]   - kubelet.housekeeping-interval=5m
	I0207 19:24:21.574244    9696 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 19:24:21.579949    9696 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0207 19:24:21.660860    9696 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.3
	k8s.gcr.io/kube-controller-manager:v1.23.3
	k8s.gcr.io/kube-scheduler:v1.23.3
	k8s.gcr.io/kube-proxy:v1.23.3
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0207 19:24:21.660860    9696 docker.go:537] Images already preloaded, skipping extraction
	I0207 19:24:21.667213    9696 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0207 19:24:21.742539    9696 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.3
	k8s.gcr.io/kube-scheduler:v1.23.3
	k8s.gcr.io/kube-controller-manager:v1.23.3
	k8s.gcr.io/kube-proxy:v1.23.3
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0207 19:24:21.742539    9696 cache_images.go:84] Images are preloaded, skipping loading
	I0207 19:24:21.752269    9696 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0207 19:24:21.964320    9696 cni.go:93] Creating CNI manager for ""
	I0207 19:24:21.964320    9696 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0207 19:24:21.964320    9696 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0207 19:24:21.964320    9696 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-20220207192142-8704 NodeName:addons-20220207192142-8704 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0207 19:24:21.964320    9696 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "addons-20220207192142-8704"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0207 19:24:21.965040    9696 kubeadm.go:935] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=addons-20220207192142-8704 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.3 ClusterName:addons-20220207192142-8704 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
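The double `ExecStart=` in the kubelet unit above is intentional systemd drop-in syntax, not a logging glitch: because `ExecStart` is a list for `Type=oneshot`-style overrides, a drop-in must first clear the value inherited from the base `kubelet.service` with an empty `ExecStart=` line before setting the replacement command. The drop-in minikube writes to `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` therefore has this shape (flags elided here; the full command line appears in the log above):

```
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.23.3/kubelet --config=/var/lib/kubelet/config.yaml ...
```

Without the empty `ExecStart=`, systemd would reject the unit for having two start commands.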
	I0207 19:24:21.973931    9696 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.3
	I0207 19:24:22.002890    9696 binaries.go:44] Found k8s binaries, skipping transfer
	I0207 19:24:22.011571    9696 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0207 19:24:22.039403    9696 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0207 19:24:22.085625    9696 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0207 19:24:22.127875    9696 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
	I0207 19:24:22.183723    9696 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0207 19:24:22.197409    9696 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0207 19:24:22.227576    9696 certs.go:54] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704 for IP: 192.168.49.2
	I0207 19:24:22.227576    9696 certs.go:187] generating minikubeCA CA: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0207 19:24:22.367475    9696 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt ...
	I0207 19:24:22.367475    9696 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt: {Name:mk1d1f25727e6fcaf35d7d74de783ad2d2c6be81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:24:22.372568    9696 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key ...
	I0207 19:24:22.372568    9696 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key: {Name:mkffeaed7182692572a4aaea1f77b60f45c78854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:24:22.372568    9696 certs.go:187] generating proxyClientCA CA: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0207 19:24:22.627416    9696 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0207 19:24:22.627416    9696 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mkc09bedb222360a1dcc92648b423932b0197d96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:24:22.627416    9696 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key ...
	I0207 19:24:22.627416    9696 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key: {Name:mk23d29d7cc073007c63c291d9cf6fa322998d26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:24:22.627416    9696 certs.go:302] generating minikube-user signed cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.key
	I0207 19:24:22.627416    9696 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt with IP's: []
	I0207 19:24:22.751099    9696 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt ...
	I0207 19:24:22.751099    9696 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: {Name:mkab3fb1ff2dbc9b8a240b292dcbe1a6a6668aa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:24:22.752324    9696 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.key ...
	I0207 19:24:22.752324    9696 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.key: {Name:mkbfb24b64f1572a056a108df72fb68f4d259d42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:24:22.752324    9696 certs.go:302] generating minikube signed cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\apiserver.key.dd3b5fb2
	I0207 19:24:22.752324    9696 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0207 19:24:22.960842    9696 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\apiserver.crt.dd3b5fb2 ...
	I0207 19:24:22.960842    9696 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\apiserver.crt.dd3b5fb2: {Name:mk34d039964d66013d362f669a28da17b7eac525 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:24:22.968011    9696 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\apiserver.key.dd3b5fb2 ...
	I0207 19:24:22.968067    9696 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\apiserver.key.dd3b5fb2: {Name:mkb415c5ee604c2522c2e09bbff61e40687830bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:24:22.969296    9696 certs.go:320] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\apiserver.crt.dd3b5fb2 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\apiserver.crt
	I0207 19:24:22.976404    9696 certs.go:324] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\apiserver.key.dd3b5fb2 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\apiserver.key
	I0207 19:24:22.978062    9696 certs.go:302] generating aggregator signed cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\proxy-client.key
	I0207 19:24:22.978062    9696 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\proxy-client.crt with IP's: []
	I0207 19:24:23.308182    9696 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\proxy-client.crt ...
	I0207 19:24:23.308182    9696 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\proxy-client.crt: {Name:mk2aabc6793bd465d65d1601bdcf078c7a63f5d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:24:23.308182    9696 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\proxy-client.key ...
	I0207 19:24:23.308182    9696 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\proxy-client.key: {Name:mka14c819471c041aeaf5c787866991ff026cec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:24:23.318413    9696 certs.go:388] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0207 19:24:23.318413    9696 certs.go:388] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0207 19:24:23.318413    9696 certs.go:388] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0207 19:24:23.318413    9696 certs.go:388] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0207 19:24:23.318413    9696 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0207 19:24:23.377912    9696 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0207 19:24:23.429900    9696 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0207 19:24:23.481238    9696 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0207 19:24:23.539035    9696 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0207 19:24:23.591620    9696 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0207 19:24:23.649025    9696 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0207 19:24:23.702777    9696 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0207 19:24:23.760516    9696 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0207 19:24:23.815693    9696 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0207 19:24:23.867640    9696 ssh_runner.go:195] Run: openssl version
	I0207 19:24:23.892325    9696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0207 19:24:23.931599    9696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0207 19:24:23.947775    9696 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb  7 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I0207 19:24:23.957306    9696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0207 19:24:23.981445    9696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
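The two openssl steps above install minikubeCA into the system trust store: copy the PEM into `/usr/share/ca-certificates`, compute its subject-name hash with `openssl x509 -hash -noout`, and symlink `<hash>.0` in `/etc/ssl/certs` so OpenSSL's `-CApath` lookup can find it. A sketch of the same mechanism with a throwaway self-signed CA in a scratch directory (assumes the `openssl` CLI is installed):

```shell
# Build a scratch trust directory and hash-link a CA into it,
# the way minikube links minikubeCA.pem -> /etc/ssl/certs/<hash>.0.
certdir=$(mktemp -d)

# Throwaway self-signed CA, standing in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=minikubeCA" \
  -keyout "$certdir/ca.key" -out "$certdir/minikubeCA.pem" -days 1 2>/dev/null

# Subject-name hash is the filename OpenSSL searches for in a CApath.
hash=$(openssl x509 -hash -noout -in "$certdir/minikubeCA.pem")
ln -fs "$certdir/minikubeCA.pem" "$certdir/$hash.0"

ls -l "$certdir/$hash.0"
```

With the `<hash>.0` link in place, `openssl verify -CApath "$certdir" …` can resolve the CA by hash instead of scanning every file in the directory.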
	I0207 19:24:24.008519    9696 kubeadm.go:390] StartCluster: {Name:addons-20220207192142-8704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:addons-20220207192142-8704 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 19:24:24.016002    9696 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0207 19:24:24.093226    9696 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0207 19:24:24.132480    9696 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0207 19:24:24.161566    9696 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0207 19:24:24.172654    9696 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0207 19:24:24.204584    9696 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0207 19:24:24.205211    9696 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0207 19:24:44.587168    9696 out.go:203]   - Generating certificates and keys ...
	I0207 19:24:44.595549    9696 out.go:203]   - Booting up control plane ...
	I0207 19:24:44.601063    9696 out.go:203]   - Configuring RBAC rules ...
	I0207 19:24:44.603736    9696 cni.go:93] Creating CNI manager for ""
	I0207 19:24:44.603736    9696 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0207 19:24:44.603736    9696 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0207 19:24:44.623474    9696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:24:44.625430    9696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=68b41900649d825bc98a620f335c8941b16741bb minikube.k8s.io/name=addons-20220207192142-8704 minikube.k8s.io/updated_at=2022_02_07T19_24_44_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:24:44.703772    9696 ops.go:34] apiserver oom_adj: -16
	I0207 19:24:45.009928    9696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:24:47.008685    9696 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.3/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=68b41900649d825bc98a620f335c8941b16741bb minikube.k8s.io/name=addons-20220207192142-8704 minikube.k8s.io/updated_at=2022_02_07T19_24_44_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (2.3832429s)
	I0207 19:24:47.109523    9696 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (2.0993407s)
	I0207 19:24:47.634441    9696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:24:48.122163    9696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:24:48.619551    9696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:24:49.129006    9696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:24:49.629025    9696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:24:50.138170    9696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:24:50.621466    9696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:24:51.138751    9696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:24:51.620332    9696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:24:52.129955    9696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:24:52.638490    9696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:24:53.134605    9696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:24:53.625612    9696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:24:54.122344    9696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:24:54.631979    9696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:24:55.129450    9696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:24:55.624388    9696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:24:56.132648    9696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:24:56.685428    9696 kubeadm.go:1019] duration metric: took 12.0795155s to wait for elevateKubeSystemPrivileges.
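The run of repeated `kubectl get sa default` calls above is a fixed-interval poll: minikube retries roughly every 500ms until the `default` service account exists, which is what the ~12s `elevateKubeSystemPrivileges` wait measures. The shape of that loop can be sketched as follows (a marker file stands in for the service account appearing; the `poll` helper is illustrative, not minikube's actual code):

```shell
# Poll until a command succeeds, with a deadline in seconds —
# the shape of minikube's wait for the default service account.
poll() {
  deadline=$(( $(date +%s) + $1 )); shift
  until "$@"; do
    if [ "$(date +%s)" -ge "$deadline" ]; then return 1; fi
    sleep 0.5
  done
}

# Stub condition: the "service account" appears after ~1s.
marker=$(mktemp -u)
( sleep 1; touch "$marker" ) &

poll 10 test -e "$marker" && echo "default SA ready"
```

Polling with a hard deadline, rather than retrying forever, is what lets the log report a duration metric and fail fast when the control plane never comes up.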
	I0207 19:24:56.685582    9696 kubeadm.go:392] StartCluster complete in 32.6768224s
	I0207 19:24:56.685702    9696 settings.go:142] acquiring lock: {Name:mke99fb8c09012609ce6804e7dfd4d68f5541df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:24:56.686006    9696 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0207 19:24:56.687154    9696 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0207 19:24:56.997710    9696 kapi.go:233] failed rescaling deployment, will retry: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0207 19:24:58.011409    9696 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "addons-20220207192142-8704" rescaled to 1
	I0207 19:24:58.011409    9696 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0207 19:24:58.011409    9696 addons.go:415] enableAddons start: toEnable=map[], additional=[registry metrics-server olm volumesnapshots csi-hostpath-driver gcp-auth ingress ingress-dns helm-tiller]
	I0207 19:24:58.011409    9696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0207 19:24:58.013175    9696 addons.go:65] Setting volumesnapshots=true in profile "addons-20220207192142-8704"
	I0207 19:24:58.013175    9696 addons.go:65] Setting csi-hostpath-driver=true in profile "addons-20220207192142-8704"
	I0207 19:24:58.013175    9696 addons.go:65] Setting default-storageclass=true in profile "addons-20220207192142-8704"
	I0207 19:24:58.013175    9696 addons.go:65] Setting metrics-server=true in profile "addons-20220207192142-8704"
	I0207 19:24:58.013175    9696 addons.go:65] Setting olm=true in profile "addons-20220207192142-8704"
	I0207 19:24:58.013175    9696 addons.go:65] Setting registry=true in profile "addons-20220207192142-8704"
	I0207 19:24:58.013175    9696 addons.go:65] Setting storage-provisioner=true in profile "addons-20220207192142-8704"
	I0207 19:24:58.013175    9696 addons.go:65] Setting helm-tiller=true in profile "addons-20220207192142-8704"
	I0207 19:24:58.013175    9696 addons.go:65] Setting gcp-auth=true in profile "addons-20220207192142-8704"
	I0207 19:24:58.013175    9696 addons.go:65] Setting ingress=true in profile "addons-20220207192142-8704"
	I0207 19:24:58.013175    9696 addons.go:65] Setting ingress-dns=true in profile "addons-20220207192142-8704"
	I0207 19:24:58.013384    9696 config.go:176] Loaded profile config "addons-20220207192142-8704": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 19:24:58.017213    9696 out.go:176] * Verifying Kubernetes components...
	I0207 19:24:58.017213    9696 addons.go:153] Setting addon olm=true in "addons-20220207192142-8704"
	I0207 19:24:58.017345    9696 addons.go:153] Setting addon registry=true in "addons-20220207192142-8704"
	I0207 19:24:58.017471    9696 addons.go:153] Setting addon metrics-server=true in "addons-20220207192142-8704"
	I0207 19:24:58.017345    9696 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-20220207192142-8704"
	I0207 19:24:58.017471    9696 host.go:66] Checking if "addons-20220207192142-8704" exists ...
	I0207 19:24:58.017471    9696 addons.go:153] Setting addon ingress=true in "addons-20220207192142-8704"
	I0207 19:24:58.017471    9696 mustload.go:65] Loading cluster: addons-20220207192142-8704
	I0207 19:24:58.017471    9696 addons.go:153] Setting addon helm-tiller=true in "addons-20220207192142-8704"
	I0207 19:24:58.017471    9696 host.go:66] Checking if "addons-20220207192142-8704" exists ...
	I0207 19:24:58.017471    9696 host.go:66] Checking if "addons-20220207192142-8704" exists ...
	I0207 19:24:58.017471    9696 host.go:66] Checking if "addons-20220207192142-8704" exists ...
	I0207 19:24:58.017471    9696 host.go:66] Checking if "addons-20220207192142-8704" exists ...
	I0207 19:24:58.017471    9696 addons.go:153] Setting addon storage-provisioner=true in "addons-20220207192142-8704"
	W0207 19:24:58.017471    9696 addons.go:165] addon storage-provisioner should already be in state true
	I0207 19:24:58.017471    9696 addons.go:153] Setting addon ingress-dns=true in "addons-20220207192142-8704"
	I0207 19:24:58.018259    9696 config.go:176] Loaded profile config "addons-20220207192142-8704": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 19:24:58.018432    9696 host.go:66] Checking if "addons-20220207192142-8704" exists ...
	I0207 19:24:58.018568    9696 host.go:66] Checking if "addons-20220207192142-8704" exists ...
	I0207 19:24:58.017345    9696 addons.go:153] Setting addon csi-hostpath-driver=true in "addons-20220207192142-8704"
	I0207 19:24:58.018691    9696 host.go:66] Checking if "addons-20220207192142-8704" exists ...
	I0207 19:24:58.017345    9696 addons.go:153] Setting addon volumesnapshots=true in "addons-20220207192142-8704"
	I0207 19:24:58.019076    9696 host.go:66] Checking if "addons-20220207192142-8704" exists ...
	I0207 19:24:58.045353    9696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0207 19:24:58.063482    9696 cli_runner.go:133] Run: docker container inspect addons-20220207192142-8704 --format={{.State.Status}}
	I0207 19:24:58.064799    9696 cli_runner.go:133] Run: docker container inspect addons-20220207192142-8704 --format={{.State.Status}}
	I0207 19:24:58.064907    9696 cli_runner.go:133] Run: docker container inspect addons-20220207192142-8704 --format={{.State.Status}}
	I0207 19:24:58.065840    9696 cli_runner.go:133] Run: docker container inspect addons-20220207192142-8704 --format={{.State.Status}}
	I0207 19:24:58.066389    9696 cli_runner.go:133] Run: docker container inspect addons-20220207192142-8704 --format={{.State.Status}}
	I0207 19:24:58.071303    9696 cli_runner.go:133] Run: docker container inspect addons-20220207192142-8704 --format={{.State.Status}}
	I0207 19:24:58.071468    9696 cli_runner.go:133] Run: docker container inspect addons-20220207192142-8704 --format={{.State.Status}}
	I0207 19:24:58.072877    9696 cli_runner.go:133] Run: docker container inspect addons-20220207192142-8704 --format={{.State.Status}}
	I0207 19:24:58.074064    9696 cli_runner.go:133] Run: docker container inspect addons-20220207192142-8704 --format={{.State.Status}}
	I0207 19:24:58.074948    9696 cli_runner.go:133] Run: docker container inspect addons-20220207192142-8704 --format={{.State.Status}}
	I0207 19:24:58.074948    9696 cli_runner.go:133] Run: docker container inspect addons-20220207192142-8704 --format={{.State.Status}}
	I0207 19:24:58.600451    9696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0207 19:24:58.617515    9696 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" addons-20220207192142-8704
	I0207 19:25:00.481108    9696 cli_runner.go:186] Completed: docker container inspect addons-20220207192142-8704 --format={{.State.Status}}: (2.4141099s)
	I0207 19:25:00.490978    9696 out.go:176] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0207 19:25:00.493987    9696 out.go:176]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0207 19:25:00.494272    9696 addons.go:348] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0207 19:25:00.494323    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0207 19:25:00.508563    9696 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704
	I0207 19:25:00.511005    9696 cli_runner.go:186] Completed: docker container inspect addons-20220207192142-8704 --format={{.State.Status}}: (2.4395246s)
	I0207 19:25:00.511005    9696 cli_runner.go:186] Completed: docker container inspect addons-20220207192142-8704 --format={{.State.Status}}: (2.4380724s)
	I0207 19:25:00.513846    9696 out.go:176]   - Using image quay.io/operator-framework/olm
	I0207 19:25:00.516432    9696 out.go:176]   - Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
	I0207 19:25:00.519023    9696 out.go:176]   - Using image quay.io/operatorhubio/catalog
	I0207 19:25:00.516432    9696 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0207 19:25:00.519023    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0207 19:25:00.526276    9696 cli_runner.go:186] Completed: docker container inspect addons-20220207192142-8704 --format={{.State.Status}}: (2.4627003s)
	I0207 19:25:00.539512    9696 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704
	I0207 19:25:00.544549    9696 cli_runner.go:186] Completed: docker container inspect addons-20220207192142-8704 --format={{.State.Status}}: (2.4795873s)
	I0207 19:25:00.547041    9696 out.go:176] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0207 19:25:00.545192    9696 cli_runner.go:186] Completed: docker container inspect addons-20220207192142-8704 --format={{.State.Status}}: (2.4735989s)
	I0207 19:25:00.550816    9696 out.go:176]   - Using image k8s.gcr.io/ingress-nginx/controller:v1.1.1
	I0207 19:25:00.553382    9696 out.go:176]   - Using image k8s.gcr.io/metrics-server/metrics-server:v0.4.2
	I0207 19:25:00.556259    9696 out.go:176]   - Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
	I0207 19:25:00.553382    9696 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0207 19:25:00.556448    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0207 19:25:00.558082    9696 out.go:176]   - Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
	I0207 19:25:00.559288    9696 addons.go:348] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0207 19:25:00.559316    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15567 bytes)
	I0207 19:25:00.561153    9696 cli_runner.go:186] Completed: docker container inspect addons-20220207192142-8704 --format={{.State.Status}}: (2.4861472s)
	I0207 19:25:00.564178    9696 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0207 19:25:00.564963    9696 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0207 19:25:00.565030    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0207 19:25:00.565361    9696 addons.go:348] installing /etc/kubernetes/addons/crds.yaml
	I0207 19:25:00.565361    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/crds.yaml (636901 bytes)
	I0207 19:25:00.568669    9696 cli_runner.go:186] Completed: docker container inspect addons-20220207192142-8704 --format={{.State.Status}}: (2.5027165s)
	I0207 19:25:00.569010    9696 cli_runner.go:186] Completed: docker container inspect addons-20220207192142-8704 --format={{.State.Status}}: (2.4936003s)
	I0207 19:25:00.572369    9696 out.go:176]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0
	I0207 19:25:00.575089    9696 out.go:176]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0
	I0207 19:25:00.572369    9696 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704
	I0207 19:25:00.573822    9696 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704
	I0207 19:25:00.577927    9696 out.go:176]   - Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
	I0207 19:25:00.580107    9696 out.go:176]   - Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
	I0207 19:25:00.578640    9696 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704
	I0207 19:25:00.591493    9696 out.go:176]   - Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0
	I0207 19:25:00.592853    9696 out.go:176]   - Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
	I0207 19:25:00.597665    9696 out.go:176]   - Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
	I0207 19:25:00.592853    9696 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort}}'" addons-20220207192142-8704
	I0207 19:25:00.599592    9696 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704
	I0207 19:25:00.601497    9696 out.go:176]   - Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
	I0207 19:25:00.604826    9696 out.go:176]   - Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
	I0207 19:25:00.602930    9696 cli_runner.go:186] Completed: docker container inspect addons-20220207192142-8704 --format={{.State.Status}}: (2.5288034s)
	I0207 19:25:00.605498    9696 addons.go:348] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0207 19:25:00.605565    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0207 19:25:00.611523    9696 out.go:176]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0207 19:25:00.611716    9696 addons.go:348] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0207 19:25:00.611716    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0207 19:25:00.622870    9696 cli_runner.go:186] Completed: docker container inspect addons-20220207192142-8704 --format={{.State.Status}}: (2.5580577s)
	I0207 19:25:00.622999    9696 host.go:66] Checking if "addons-20220207192142-8704" exists ...
	I0207 19:25:00.625688    9696 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704
	I0207 19:25:00.632137    9696 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704
	I0207 19:25:00.635827    9696 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" addons-20220207192142-8704
	I0207 19:25:00.992605    9696 addons.go:153] Setting addon default-storageclass=true in "addons-20220207192142-8704"
	W0207 19:25:00.995191    9696 addons.go:165] addon default-storageclass should already be in state true
	I0207 19:25:00.995191    9696 host.go:66] Checking if "addons-20220207192142-8704" exists ...
	I0207 19:25:01.019758    9696 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" addons-20220207192142-8704: (2.4022305s)
	I0207 19:25:01.020063    9696 cli_runner.go:133] Run: docker container inspect addons-20220207192142-8704 --format={{.State.Status}}
	I0207 19:25:01.046652    9696 node_ready.go:35] waiting up to 6m0s for node "addons-20220207192142-8704" to be "Ready" ...
	I0207 19:25:01.097355    9696 node_ready.go:49] node "addons-20220207192142-8704" has status "Ready":"True"
	I0207 19:25:01.097355    9696 node_ready.go:38] duration metric: took 50.7024ms waiting for node "addons-20220207192142-8704" to be "Ready" ...
	I0207 19:25:01.097355    9696 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0207 19:25:01.211173    9696 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-2xfj5" in "kube-system" namespace to be "Ready" ...
	I0207 19:25:02.881896    9696 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704: (2.3423719s)
	I0207 19:25:02.883468    9696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61971 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-20220207192142-8704\id_rsa Username:docker}
	I0207 19:25:02.942808    9696 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704: (2.3105565s)
	I0207 19:25:02.942808    9696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61971 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-20220207192142-8704\id_rsa Username:docker}
	I0207 19:25:02.968120    9696 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704: (2.3677507s)
	I0207 19:25:02.968187    9696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61971 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-20220207192142-8704\id_rsa Username:docker}
	I0207 19:25:02.987633    9696 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704: (2.4783862s)
	I0207 19:25:02.989339    9696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61971 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-20220207192142-8704\id_rsa Username:docker}
	I0207 19:25:03.004573    9696 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704: (2.427132s)
	I0207 19:25:03.004758    9696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61971 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-20220207192142-8704\id_rsa Username:docker}
	I0207 19:25:03.021985    9696 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort}}'" addons-20220207192142-8704: (2.4192109s)
	I0207 19:25:03.026432    9696 out.go:176] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                      │
	│    Registry addon with docker driver uses port 61974 please use that instead of default port 5000    │
	│                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0207 19:25:03.029275    9696 out.go:176] * For more information see: https://minikube.sigs.k8s.io/docs/drivers/docker
	I0207 19:25:03.031656    9696 out.go:176]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I0207 19:25:03.034080    9696 out.go:176]   - Using image registry:2.7.1
	I0207 19:25:03.034080    9696 addons.go:348] installing /etc/kubernetes/addons/registry-rc.yaml
	I0207 19:25:03.034080    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (788 bytes)
	I0207 19:25:03.035125    9696 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704: (2.4458618s)
	I0207 19:25:03.035331    9696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61971 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-20220207192142-8704\id_rsa Username:docker}
	I0207 19:25:03.043711    9696 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704
	I0207 19:25:03.048480    9696 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704: (2.4226571s)
	I0207 19:25:03.048480    9696 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704: (2.473291s)
	I0207 19:25:03.048480    9696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61971 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-20220207192142-8704\id_rsa Username:docker}
	I0207 19:25:03.048480    9696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61971 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-20220207192142-8704\id_rsa Username:docker}
	I0207 19:25:03.111450    9696 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" addons-20220207192142-8704: (2.475464s)
	I0207 19:25:03.184224    9696 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0207 19:25:03.199262    9696 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704
	I0207 19:25:03.385048    9696 cli_runner.go:186] Completed: docker container inspect addons-20220207192142-8704 --format={{.State.Status}}: (2.3649726s)
	I0207 19:25:03.385048    9696 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0207 19:25:03.385048    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0207 19:25:03.405146    9696 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704
	I0207 19:25:03.488677    9696 pod_ready.go:102] pod "coredns-64897985d-2xfj5" in "kube-system" namespace has status "Ready":"False"
	I0207 19:25:04.000100    9696 addons.go:348] installing /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml
	I0207 19:25:04.000155    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml (2203 bytes)
	I0207 19:25:04.203568    9696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0207 19:25:04.289749    9696 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0207 19:25:04.290113    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1931 bytes)
	I0207 19:25:04.291998    9696 addons.go:348] installing /etc/kubernetes/addons/olm.yaml
	I0207 19:25:04.292045    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/olm.yaml (9994 bytes)
	I0207 19:25:04.306048    9696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0207 19:25:04.306870    9696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0207 19:25:04.389540    9696 addons.go:348] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0207 19:25:04.389540    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0207 19:25:04.401751    9696 addons.go:348] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0207 19:25:04.406252    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0207 19:25:04.593379    9696 addons.go:348] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0207 19:25:04.593379    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3037 bytes)
	I0207 19:25:04.783644    9696 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704: (1.7399237s)
	I0207 19:25:04.783644    9696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61971 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-20220207192142-8704\id_rsa Username:docker}
	I0207 19:25:04.900530    9696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0207 19:25:04.938740    9696 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704: (1.7391251s)
	I0207 19:25:04.938941    9696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61971 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-20220207192142-8704\id_rsa Username:docker}
	I0207 19:25:05.066804    9696 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704: (1.6615663s)
	I0207 19:25:05.066940    9696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61971 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-20220207192142-8704\id_rsa Username:docker}
	I0207 19:25:05.090618    9696 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0207 19:25:05.090618    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0207 19:25:05.187079    9696 addons.go:348] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0207 19:25:05.187189    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0207 19:25:05.188620    9696 addons.go:348] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0207 19:25:05.188620    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0207 19:25:05.388864    9696 addons.go:348] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0207 19:25:05.388864    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (3666 bytes)
	I0207 19:25:05.788750    9696 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0207 19:25:05.788809    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0207 19:25:05.789145    9696 pod_ready.go:102] pod "coredns-64897985d-2xfj5" in "kube-system" namespace has status "Ready":"False"
	I0207 19:25:05.988614    9696 addons.go:348] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0207 19:25:05.988614    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19584 bytes)
	I0207 19:25:06.104234    9696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0207 19:25:06.286945    9696 addons.go:348] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0207 19:25:06.287201    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2944 bytes)
	I0207 19:25:06.502010    9696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0207 19:25:06.590574    9696 addons.go:348] installing /etc/kubernetes/addons/registry-svc.yaml
	I0207 19:25:06.590816    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0207 19:25:06.590574    9696 addons.go:348] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0207 19:25:06.590993    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3428 bytes)
	I0207 19:25:06.605711    9696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0207 19:25:06.690062    9696 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0207 19:25:06.887119    9696 addons.go:348] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0207 19:25:06.887119    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3194 bytes)
	I0207 19:25:07.089042    9696 addons.go:348] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0207 19:25:07.089042    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1071 bytes)
	I0207 19:25:07.189089    9696 addons.go:153] Setting addon gcp-auth=true in "addons-20220207192142-8704"
	I0207 19:25:07.189089    9696 host.go:66] Checking if "addons-20220207192142-8704" exists ...
	I0207 19:25:07.190917    9696 pod_ready.go:92] pod "coredns-64897985d-2xfj5" in "kube-system" namespace has status "Ready":"True"
	I0207 19:25:07.190986    9696 pod_ready.go:81] duration metric: took 5.9791515s waiting for pod "coredns-64897985d-2xfj5" in "kube-system" namespace to be "Ready" ...
	I0207 19:25:07.190986    9696 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-gsh99" in "kube-system" namespace to be "Ready" ...
	I0207 19:25:07.206256    9696 cli_runner.go:133] Run: docker container inspect addons-20220207192142-8704 --format={{.State.Status}}
	I0207 19:25:07.287970    9696 addons.go:348] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0207 19:25:07.287970    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
	I0207 19:25:07.389139    9696 pod_ready.go:92] pod "coredns-64897985d-gsh99" in "kube-system" namespace has status "Ready":"True"
	I0207 19:25:07.389139    9696 pod_ready.go:81] duration metric: took 198.1521ms waiting for pod "coredns-64897985d-gsh99" in "kube-system" namespace to be "Ready" ...
	I0207 19:25:07.389139    9696 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-20220207192142-8704" in "kube-system" namespace to be "Ready" ...
	I0207 19:25:07.485777    9696 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0207 19:25:07.485777    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2421 bytes)
	I0207 19:25:07.500616    9696 pod_ready.go:92] pod "etcd-addons-20220207192142-8704" in "kube-system" namespace has status "Ready":"True"
	I0207 19:25:07.500616    9696 pod_ready.go:81] duration metric: took 111.4759ms waiting for pod "etcd-addons-20220207192142-8704" in "kube-system" namespace to be "Ready" ...
	I0207 19:25:07.500616    9696 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-20220207192142-8704" in "kube-system" namespace to be "Ready" ...
	I0207 19:25:07.687403    9696 pod_ready.go:92] pod "kube-apiserver-addons-20220207192142-8704" in "kube-system" namespace has status "Ready":"True"
	I0207 19:25:07.687403    9696 pod_ready.go:81] duration metric: took 186.7867ms waiting for pod "kube-apiserver-addons-20220207192142-8704" in "kube-system" namespace to be "Ready" ...
	I0207 19:25:07.687492    9696 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-20220207192142-8704" in "kube-system" namespace to be "Ready" ...
	I0207 19:25:07.712554    9696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0207 19:25:07.802958    9696 pod_ready.go:92] pod "kube-controller-manager-addons-20220207192142-8704" in "kube-system" namespace has status "Ready":"True"
	I0207 19:25:07.802958    9696 pod_ready.go:81] duration metric: took 115.3354ms waiting for pod "kube-controller-manager-addons-20220207192142-8704" in "kube-system" namespace to be "Ready" ...
	I0207 19:25:07.802958    9696 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-54gxw" in "kube-system" namespace to be "Ready" ...
	I0207 19:25:07.804154    9696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0207 19:25:07.901090    9696 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0207 19:25:07.901090    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1034 bytes)
	I0207 19:25:07.988179    9696 pod_ready.go:92] pod "kube-proxy-54gxw" in "kube-system" namespace has status "Ready":"True"
	I0207 19:25:07.988234    9696 pod_ready.go:81] duration metric: took 185.2751ms waiting for pod "kube-proxy-54gxw" in "kube-system" namespace to be "Ready" ...
	I0207 19:25:07.988320    9696 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-20220207192142-8704" in "kube-system" namespace to be "Ready" ...
	I0207 19:25:08.089730    9696 pod_ready.go:92] pod "kube-scheduler-addons-20220207192142-8704" in "kube-system" namespace has status "Ready":"True"
	I0207 19:25:08.089730    9696 pod_ready.go:81] duration metric: took 101.3377ms waiting for pod "kube-scheduler-addons-20220207192142-8704" in "kube-system" namespace to be "Ready" ...
	I0207 19:25:08.089730    9696 pod_ready.go:38] duration metric: took 6.9923383s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0207 19:25:08.089730    9696 api_server.go:51] waiting for apiserver process to appear ...
	I0207 19:25:08.100689    9696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0207 19:25:08.206162    9696 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0207 19:25:08.206269    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (6710 bytes)
	I0207 19:25:08.631553    9696 cli_runner.go:186] Completed: docker container inspect addons-20220207192142-8704 --format={{.State.Status}}: (1.4251783s)
	I0207 19:25:08.640310    9696 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0207 19:25:08.646700    9696 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704
	I0207 19:25:09.491785    9696 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-provisioner.yaml
	I0207 19:25:09.491785    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-provisioner.yaml (2555 bytes)
	I0207 19:25:09.786352    9696 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0207 19:25:09.786352    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2469 bytes)
	I0207 19:25:10.060462    9696 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220207192142-8704: (1.4136955s)
	I0207 19:25:10.060593    9696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61971 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-20220207192142-8704\id_rsa Username:docker}
	I0207 19:25:10.286015    9696 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml
	I0207 19:25:10.286015    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml (2555 bytes)
	I0207 19:25:10.986835    9696 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0207 19:25:10.986835    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0207 19:25:11.090970    9696 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (12.490403s)
	I0207 19:25:11.090970    9696 start.go:777] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0207 19:25:11.503863    9696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0207 19:25:12.393049    9696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.1894385s)
	I0207 19:25:13.284015    9696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.9779203s)
	I0207 19:25:17.109038    9696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (12.8020095s)
	I0207 19:25:17.109148    9696 addons.go:386] Verifying addon ingress=true in "addons-20220207192142-8704"
	I0207 19:25:17.113424    9696 out.go:176] * Verifying ingress addon...
	I0207 19:25:17.129464    9696 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0207 19:25:17.294201    9696 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0207 19:25:17.294201    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:17.892727    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:18.598501    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:18.990521    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:19.390896    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:19.893917    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:20.391858    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:20.994267    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:21.495480    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:21.998101    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:22.889798    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:23.406100    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:23.989657    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:24.491286    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:24.898529    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:25.592113    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:25.991711    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:26.391713    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:27.289432    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:27.692376    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:27.793774    9696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (22.8931247s)
	W0207 19:25:27.793929    9696 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorconditions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0207 19:25:27.793982    9696 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorconditions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0207 19:25:27.793982    9696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (21.6895823s)
	I0207 19:25:27.794147    9696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (21.2920261s)
	I0207 19:25:27.794213    9696 addons.go:386] Verifying addon metrics-server=true in "addons-20220207192142-8704"
	I0207 19:25:27.794284    9696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (21.1884002s)
	I0207 19:25:27.794422    9696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (20.0817636s)
	W0207 19:25:27.794476    9696 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0207 19:25:27.794580    9696 retry.go:31] will retry after 360.127272ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0207 19:25:27.794763    9696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (19.9905059s)
	I0207 19:25:27.794812    9696 addons.go:386] Verifying addon registry=true in "addons-20220207192142-8704"
	I0207 19:25:27.804460    9696 out.go:176] * Verifying registry addon...
	I0207 19:25:27.795588    9696 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (19.6947405s)
	I0207 19:25:27.795667    9696 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (19.1552057s)
	I0207 19:25:27.804602    9696 api_server.go:71] duration metric: took 29.7930086s to wait for apiserver process to appear ...
	I0207 19:25:27.804672    9696 api_server.go:87] waiting for apiserver healthz status ...
	I0207 19:25:27.804742    9696 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61970/healthz ...
	I0207 19:25:27.810450    9696 out.go:176]   - Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.0
	I0207 19:25:27.813376    9696 out.go:176]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.0.8
	I0207 19:25:27.814012    9696 addons.go:348] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0207 19:25:27.814084    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0207 19:25:27.822305    9696 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0207 19:25:27.989613    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:28.087289    9696 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0207 19:25:28.087596    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:28.094733    9696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0207 19:25:28.164411    9696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0207 19:25:28.188359    9696 api_server.go:266] https://127.0.0.1:61970/healthz returned 200:
	ok
	I0207 19:25:28.285608    9696 api_server.go:140] control plane version: v1.23.3
	I0207 19:25:28.285608    9696 api_server.go:130] duration metric: took 480.9334ms to wait for apiserver health ...
	I0207 19:25:28.285608    9696 system_pods.go:43] waiting for kube-system pods to appear ...
	I0207 19:25:28.390047    9696 addons.go:348] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0207 19:25:28.390047    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0207 19:25:28.593258    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:28.701820    9696 system_pods.go:59] 15 kube-system pods found
	I0207 19:25:28.701820    9696 system_pods.go:61] "coredns-64897985d-2xfj5" [ebd58b43-0336-40ec-8631-ff41f01bc9e5] Running
	I0207 19:25:28.701820    9696 system_pods.go:61] "coredns-64897985d-gsh99" [e2aedf53-ff00-4cc0-83e6-5c41db105e55] Running
	I0207 19:25:28.701820    9696 system_pods.go:61] "etcd-addons-20220207192142-8704" [7a08079c-5614-454e-a3d8-676eb61d8f43] Running
	I0207 19:25:28.701820    9696 system_pods.go:61] "kube-apiserver-addons-20220207192142-8704" [02b586c8-5098-43f2-9ce3-de6a1223a1c3] Running
	I0207 19:25:28.701820    9696 system_pods.go:61] "kube-controller-manager-addons-20220207192142-8704" [14cf697b-5ec2-49c4-98bf-ad7b6085cfb2] Running
	I0207 19:25:28.701820    9696 system_pods.go:61] "kube-ingress-dns-minikube" [4469bb60-fedc-4b45-85b6-34f38208bbfb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0207 19:25:28.701820    9696 system_pods.go:61] "kube-proxy-54gxw" [b59340ef-8b04-409a-a5bf-fd415f54aa15] Running
	I0207 19:25:28.701820    9696 system_pods.go:61] "kube-scheduler-addons-20220207192142-8704" [69068172-97eb-4ff0-989e-b353ebf1a77c] Running
	I0207 19:25:28.701820    9696 system_pods.go:61] "metrics-server-6b76bd68b6-gc9ll" [6b1f7007-f81a-436b-ab42-07aec87b70e8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0207 19:25:28.701820    9696 system_pods.go:61] "registry-fcmht" [d55bf42e-dc40-4769-95bf-c129acb49503] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0207 19:25:28.701820    9696 system_pods.go:61] "registry-proxy-k42vl" [f1e425ea-9c0c-4b22-8fab-63b7c4581e16] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0207 19:25:28.701820    9696 system_pods.go:61] "snapshot-controller-7f76975c56-6cd5t" [c02fd4d9-92dc-449d-bcd7-c3175e7c46fd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0207 19:25:28.701820    9696 system_pods.go:61] "snapshot-controller-7f76975c56-kr57t" [24ce149d-f7ee-474e-8798-edd307614191] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0207 19:25:28.701820    9696 system_pods.go:61] "storage-provisioner" [c1b72a76-f3d0-4e48-b9d5-203992d36a0e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0207 19:25:28.701820    9696 system_pods.go:61] "tiller-deploy-6d67d5465d-hkfc5" [5228fb58-09ee-4131-a27b-df92cd5d78ef] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0207 19:25:28.701820    9696 system_pods.go:74] duration metric: took 416.21ms to wait for pod list to return data ...
	I0207 19:25:28.701820    9696 default_sa.go:34] waiting for default service account to be created ...
	I0207 19:25:28.987213    9696 addons.go:348] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0207 19:25:28.987213    9696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (4842 bytes)
	I0207 19:25:29.088671    9696 default_sa.go:45] found service account: "default"
	I0207 19:25:29.088719    9696 default_sa.go:55] duration metric: took 386.8968ms for default service account to be created ...
	I0207 19:25:29.088719    9696 system_pods.go:116] waiting for k8s-apps to be running ...
	I0207 19:25:29.092571    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:29.093036    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:29.304652    9696 system_pods.go:86] 15 kube-system pods found
	I0207 19:25:29.304749    9696 system_pods.go:89] "coredns-64897985d-2xfj5" [ebd58b43-0336-40ec-8631-ff41f01bc9e5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0207 19:25:29.304749    9696 system_pods.go:89] "coredns-64897985d-gsh99" [e2aedf53-ff00-4cc0-83e6-5c41db105e55] Running
	I0207 19:25:29.304749    9696 system_pods.go:89] "etcd-addons-20220207192142-8704" [7a08079c-5614-454e-a3d8-676eb61d8f43] Running
	I0207 19:25:29.304749    9696 system_pods.go:89] "kube-apiserver-addons-20220207192142-8704" [02b586c8-5098-43f2-9ce3-de6a1223a1c3] Running
	I0207 19:25:29.304749    9696 system_pods.go:89] "kube-controller-manager-addons-20220207192142-8704" [14cf697b-5ec2-49c4-98bf-ad7b6085cfb2] Running
	I0207 19:25:29.304842    9696 system_pods.go:89] "kube-ingress-dns-minikube" [4469bb60-fedc-4b45-85b6-34f38208bbfb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0207 19:25:29.304842    9696 system_pods.go:89] "kube-proxy-54gxw" [b59340ef-8b04-409a-a5bf-fd415f54aa15] Running
	I0207 19:25:29.304842    9696 system_pods.go:89] "kube-scheduler-addons-20220207192142-8704" [69068172-97eb-4ff0-989e-b353ebf1a77c] Running
	I0207 19:25:29.304842    9696 system_pods.go:89] "metrics-server-6b76bd68b6-gc9ll" [6b1f7007-f81a-436b-ab42-07aec87b70e8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0207 19:25:29.305341    9696 system_pods.go:89] "registry-fcmht" [d55bf42e-dc40-4769-95bf-c129acb49503] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0207 19:25:29.305822    9696 system_pods.go:89] "registry-proxy-k42vl" [f1e425ea-9c0c-4b22-8fab-63b7c4581e16] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0207 19:25:29.306221    9696 system_pods.go:89] "snapshot-controller-7f76975c56-6cd5t" [c02fd4d9-92dc-449d-bcd7-c3175e7c46fd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0207 19:25:29.306506    9696 system_pods.go:89] "snapshot-controller-7f76975c56-kr57t" [24ce149d-f7ee-474e-8798-edd307614191] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0207 19:25:29.306612    9696 system_pods.go:89] "storage-provisioner" [c1b72a76-f3d0-4e48-b9d5-203992d36a0e] Running
	I0207 19:25:29.306612    9696 system_pods.go:89] "tiller-deploy-6d67d5465d-hkfc5" [5228fb58-09ee-4131-a27b-df92cd5d78ef] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0207 19:25:29.306612    9696 system_pods.go:126] duration metric: took 217.8917ms to wait for k8s-apps to be running ...
	I0207 19:25:29.306612    9696 system_svc.go:44] waiting for kubelet service to be running ....
	I0207 19:25:29.318152    9696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0207 19:25:29.594087    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:29.598519    9696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0207 19:25:29.691666    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:29.994069    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:30.190910    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:30.393363    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:30.687205    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:30.892469    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:31.299165    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:31.390189    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:31.797033    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:31.996317    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:32.287923    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:32.489764    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:32.690787    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:32.896512    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:33.392309    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:33.393257    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:33.989576    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:33.990923    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:34.292250    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:34.490923    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:34.787951    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:34.993710    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:35.191636    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:35.283634    9696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (23.7796469s)
	I0207 19:25:35.283634    9696 addons.go:386] Verifying addon csi-hostpath-driver=true in "addons-20220207192142-8704"
	I0207 19:25:35.291688    9696 out.go:176] * Verifying csi-hostpath-driver addon...
	I0207 19:25:35.310844    9696 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0207 19:25:35.497356    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:35.497934    9696 kapi.go:86] Found 5 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0207 19:25:35.497934    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:35.696758    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:35.989581    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:36.099735    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:36.197948    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:36.392193    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:36.595365    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:36.690740    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:36.892121    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:37.100852    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:37.188983    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:37.390278    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:37.691662    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:37.695629    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:37.889575    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:38.099837    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:38.192875    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:38.391496    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:38.597183    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:38.601296    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:38.891613    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:39.095799    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:39.187600    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:39.390802    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:39.697184    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:39.989002    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:39.992325    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:40.191065    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:40.198867    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:40.493780    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:40.790765    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:40.797306    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:40.992587    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:41.100622    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:41.191122    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:41.592731    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:41.793593    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:41.795207    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:41.991020    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:42.493211    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:42.493211    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:42.499287    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:42.791650    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:42.989869    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:43.115972    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:43.193014    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:43.390490    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:43.598303    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:43.606487    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:43.808042    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:44.099968    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:44.186018    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:44.389141    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:44.598243    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:44.687282    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:44.887088    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:45.095278    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:45.197744    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:45.389545    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:45.599539    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:45.688235    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:45.892326    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:46.108071    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:46.192391    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:46.391662    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:46.596278    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:46.690429    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:46.891157    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:47.099287    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:47.190471    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:47.396670    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:47.598652    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:47.686834    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:47.697389    9696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (19.532845s)
	I0207 19:25:47.887318    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:48.094387    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:48.196954    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:48.388582    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:48.526304    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:48.692695    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:48.896326    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:49.093478    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:49.102204    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:49.308915    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:49.596009    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:49.695119    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:49.895383    9696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (21.8004715s)
	I0207 19:25:49.895383    9696 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (20.5771245s)
	I0207 19:25:49.895537    9696 system_svc.go:56] duration metric: took 20.5888184s WaitForService to wait for kubelet.
	I0207 19:25:49.895537    9696 kubeadm.go:547] duration metric: took 51.8838582s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0207 19:25:49.895621    9696 node_conditions.go:102] verifying NodePressure condition ...
	I0207 19:25:49.895537    9696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (20.2968496s)
	I0207 19:25:49.899401    9696 addons.go:386] Verifying addon gcp-auth=true in "addons-20220207192142-8704"
	I0207 19:25:49.902928    9696 out.go:176] * Verifying gcp-auth addon...
	I0207 19:25:49.906167    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:49.917332    9696 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0207 19:25:49.987070    9696 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0207 19:25:49.987070    9696 node_conditions.go:123] node cpu capacity is 16
	I0207 19:25:49.987697    9696 node_conditions.go:105] duration metric: took 92.0296ms to run NodePressure ...
	I0207 19:25:49.987800    9696 start.go:213] waiting for startup goroutines ...
	I0207 19:25:49.990413    9696 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0207 19:25:49.990413    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:25:50.087060    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:50.105774    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:50.305786    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:50.500511    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:25:50.522129    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:50.685931    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:50.886571    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:51.005130    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:25:51.020467    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:51.115388    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:51.311906    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:51.584430    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:25:51.593944    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:51.604752    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:51.810104    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:52.002679    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:25:52.088392    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:52.104064    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:52.386257    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:52.514320    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:25:52.520409    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:52.613174    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:52.813548    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:53.006181    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:25:53.022971    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:53.119990    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:53.319676    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:53.590342    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:25:53.590540    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:53.603008    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:53.812853    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:54.004631    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:25:54.015746    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:54.110347    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:54.312128    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:54.506135    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:25:54.591028    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:54.604546    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:54.817834    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:55.014557    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:25:55.020019    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:55.110220    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:55.318729    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:55.515161    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:25:55.586187    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:55.612756    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:55.818921    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:56.011653    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:25:56.093338    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:56.108969    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:56.318935    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:56.502616    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:25:56.589128    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:56.604926    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:56.889912    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:57.014856    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:25:57.025785    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:57.109016    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:57.316298    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:57.516168    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:25:57.524052    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:57.611154    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:57.884266    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:58.083143    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:25:58.104361    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:58.185029    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:58.388623    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:58.589763    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:25:58.602768    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:58.686709    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:58.884786    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:59.087032    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:25:59.097126    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:59.108830    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:59.317053    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:25:59.516165    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:25:59.597265    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:25:59.615135    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:25:59.808494    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:00.001366    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:00.092046    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:00.185328    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:00.319242    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:00.510388    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:00.525795    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:00.690599    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:00.825700    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:01.089682    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:01.098528    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:01.188040    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:01.320126    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:01.516778    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:01.522915    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:01.608905    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:01.812743    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:02.020512    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:02.026963    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:02.119032    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:02.311596    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:03.840273    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:03.843535    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:03.846014    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:03.847557    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:04.019766    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:04.028190    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:04.107813    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:04.321165    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:04.515969    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:04.596250    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:04.684957    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:04.889176    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:05.008661    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:05.096647    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:05.199106    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:05.314661    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:05.589310    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:05.611040    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:05.688992    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:05.895966    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:06.090080    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:06.097319    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:06.209600    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:06.396485    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:06.593465    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:06.605503    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:06.609529    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:06.893392    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:07.006660    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:07.026034    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:07.110970    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:07.310045    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:07.586253    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:07.596917    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:07.608791    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:07.819907    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:08.011250    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:08.089790    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:08.112584    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:08.390320    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:08.505664    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:08.593852    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:08.687914    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:08.887385    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:09.088793    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:09.097990    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:09.186709    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:09.308528    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:09.503567    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:09.521421    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:09.610473    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:09.887182    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:10.012372    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:10.091751    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:10.107040    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:10.315400    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:10.504525    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:10.591959    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:10.612092    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:10.823934    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:11.012215    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:11.086434    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:11.116785    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:11.385628    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:11.515773    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:11.521052    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:11.688553    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:11.819773    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:12.009410    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:12.092682    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:12.109774    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:12.385388    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:12.504567    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:12.594283    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:12.605034    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:12.889220    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:13.017700    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:13.028229    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:13.147447    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:14.147435    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:14.147435    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:14.148739    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:14.150872    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:14.323334    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:14.528787    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:14.588512    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:14.609245    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:14.816156    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:15.002589    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:15.101673    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:15.194438    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:15.390443    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:15.509424    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:15.600583    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:15.610897    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:15.815436    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:16.102072    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:16.108701    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:16.190447    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:16.389628    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:16.691036    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:16.692672    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:16.698069    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:16.893608    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:17.092984    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:17.191920    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:17.195847    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:17.388712    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:17.588963    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:17.595026    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:17.602312    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:17.884941    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:18.010087    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:18.018080    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:18.116495    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:18.319012    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:18.516707    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:18.586423    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:18.608331    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:18.885790    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:19.004872    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:19.088125    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:19.190425    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:19.394575    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:19.589347    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:19.594735    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:19.604992    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:19.807611    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:20.013766    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:20.018487    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:20.107421    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:20.309017    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:20.510379    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:20.595152    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:20.687543    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:20.887553    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:21.013922    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:21.094978    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:21.105590    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:21.314666    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:21.515286    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:21.524077    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:22.136177    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:22.138779    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:22.144065    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:22.144868    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:22.317741    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:22.512654    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:22.518476    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:22.611061    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:22.884461    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:23.022300    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:23.034985    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:23.119225    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:23.313544    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:23.507153    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:23.513707    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:23.621255    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:23.824801    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:24.191675    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:24.193761    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:24.199085    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:24.400913    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:24.689005    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:24.697770    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:24.785430    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:24.895158    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:25.090412    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:25.097732    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:25.187845    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:25.311848    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:25.511568    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:25.593774    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:25.686804    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:25.825267    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:26.011754    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:26.020089    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:26.187649    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:26.387896    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:26.501811    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:26.519967    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:26.610871    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:26.808220    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:27.012552    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:27.022196    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:27.186119    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:27.311881    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:27.513301    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:27.518637    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:27.602892    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:27.820162    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:28.014255    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:28.097103    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:28.401909    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:28.408150    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:28.530225    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:28.535509    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:28.606547    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:28.824687    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:29.088587    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:29.103626    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:29.192258    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:29.315695    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:29.588173    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:29.602883    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:29.606503    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:29.896027    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:30.086368    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:30.105348    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:30.199430    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:30.394813    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:30.589514    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:30.602190    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:30.614673    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:30.889697    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:31.009492    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:31.096080    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:31.117238    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:31.400240    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:31.586974    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:31.602390    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:31.686301    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:31.827159    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:32.014611    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:32.022222    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:32.118380    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:32.324189    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:32.522735    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:32.530174    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:32.685590    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:32.819827    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:33.004547    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:33.092839    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:33.289335    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:33.390853    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:33.504937    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:33.584905    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:33.609277    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:33.814578    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:34.089119    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:34.102001    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:34.110690    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:34.325513    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:34.587736    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:34.604273    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:34.686273    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:34.816709    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:35.004527    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:35.087286    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:35.113092    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:35.306863    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:35.591831    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:35.602049    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:35.606182    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:35.818392    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:36.083872    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:36.101071    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:36.113422    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:36.314048    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:36.514849    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:36.521756    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:36.618908    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:36.811800    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:37.006052    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:37.013876    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:37.112244    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:37.318510    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:37.516993    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:37.584122    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:37.606963    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:37.810948    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:38.005082    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:38.091413    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:38.113858    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:38.320770    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:38.596400    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:38.603964    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:38.686459    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:38.813119    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:39.020121    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:39.024823    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:39.106151    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:39.314863    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:39.513235    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:39.588575    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:39.603986    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:39.887086    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:40.085509    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:40.091637    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:40.105374    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:40.383274    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:40.509724    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:40.592117    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:40.606742    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:40.885480    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:41.085768    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:41.099917    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:41.189542    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:41.315991    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:41.508614    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:41.596658    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:41.613891    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:41.887153    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:42.007361    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:42.088158    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:42.101684    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:42.307639    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:42.508962    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:42.593039    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:42.605074    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:42.885072    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:43.012679    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:43.091631    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:43.105871    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:43.309124    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:43.500383    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:43.516071    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:43.611064    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:43.895130    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:44.001193    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:44.017001    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:44.102147    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:44.305859    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:44.587178    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:44.594350    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:44.605854    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:44.885789    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:45.009034    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:45.024799    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:45.106710    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:45.307113    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:45.513502    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:45.522467    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:45.603120    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:45.886462    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:46.087958    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:46.100762    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:46.186674    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:46.391820    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:46.518226    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:46.591958    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:46.602226    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:46.889213    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:47.013447    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:47.019202    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:47.108901    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:47.322549    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:47.589861    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:47.597427    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:47.602090    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:47.815183    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:48.003621    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:48.087311    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:48.194011    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:48.320021    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:48.507376    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:48.514186    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:48.688208    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:48.805520    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:49.005535    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:49.027191    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:49.108459    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:49.308817    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:49.517419    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:49.587016    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:49.609706    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:49.811200    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:50.020043    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:50.092436    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:50.106297    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:50.384601    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:50.504815    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:50.598440    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:50.686116    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:50.819272    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:51.014072    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:51.033732    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:51.101337    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:51.324219    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:51.507646    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:51.597705    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:51.611292    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:51.809328    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:52.035515    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:52.039864    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:52.114265    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:53.537391    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:53.542821    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:53.556420    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:53.587016    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:54.444505    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:54.445162    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:54.446379    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:54.454875    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:54.501628    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:54.525674    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:54.609573    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:54.824889    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:55.008040    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:55.024442    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:55.109351    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:55.316167    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:55.519336    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:55.536184    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:55.623452    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:55.816508    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:56.017446    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:56.025144    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:56.189367    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:56.393465    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:56.590271    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:56.605904    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:56.792063    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0207 19:26:56.985598    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:57.091601    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:57.197366    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:57.200280    9696 kapi.go:108] duration metric: took 1m29.3775127s to wait for kubernetes.io/minikube-addons=registry ...
	I0207 19:26:57.306675    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:57.515266    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:57.518495    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:57.824093    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:58.011611    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:58.087553    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:58.314895    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:58.508456    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:58.586717    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:58.817262    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:59.084780    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:59.098304    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:59.386728    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:26:59.499511    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:26:59.590377    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:26:59.887537    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:00.084985    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:00.099162    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:00.320379    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:00.529930    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:00.544062    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:01.161926    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:01.162423    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:01.167598    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:01.325294    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:01.509248    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:01.536796    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:01.891867    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:02.089084    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:02.106775    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:02.392352    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:02.686049    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:02.693733    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:02.892144    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:03.188784    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:03.199765    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:03.320161    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:03.595285    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:03.604127    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:03.898384    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:04.085512    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:04.105119    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:04.394184    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:04.592420    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:04.600514    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:04.892092    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:05.201539    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:05.208173    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:05.397372    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:05.594391    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:05.601412    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:05.890098    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:06.008944    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:06.097501    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:06.390579    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:06.508726    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:06.590050    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:06.817535    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:07.002302    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:07.093790    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:07.312197    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:07.587449    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:07.601414    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:07.907628    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:08.089351    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:08.105199    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:08.315690    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:08.507198    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:08.596157    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:09.207465    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:09.208619    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:09.215392    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:09.320488    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:09.526732    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:09.537986    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:09.828727    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:10.004418    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:10.033202    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:10.388762    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:10.600597    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:10.703955    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:10.824219    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:11.006618    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:11.091429    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:11.322985    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:11.518603    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:11.532588    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:11.887575    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:12.013707    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:12.103907    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:12.399452    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:12.705493    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:12.796288    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:12.996864    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:13.091415    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:13.100126    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:13.392520    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:13.590679    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:13.591413    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:13.894464    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:14.008722    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:14.097299    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:14.315352    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:14.591531    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:14.602824    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:14.815156    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:15.092298    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:15.103515    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:15.388508    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:15.504621    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:15.608267    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:15.812594    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:16.018344    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:16.092670    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:16.392972    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:16.586763    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:16.595134    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:16.897779    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:17.089253    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:17.101584    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:17.386809    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:17.504300    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:17.590417    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:17.813746    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:18.011197    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:18.088233    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:18.314124    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:18.591164    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:18.600386    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:18.821223    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:19.008692    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:19.095584    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:19.318974    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:19.501269    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:19.595344    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:19.815600    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:20.003496    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:20.030147    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:20.313681    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:20.505803    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:20.594063    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:20.819645    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:21.014232    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:21.018127    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:21.390983    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:21.522547    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:21.524054    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:21.885844    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:22.010077    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:22.091242    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:22.392855    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:22.586912    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:22.592548    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:22.892927    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:23.024837    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:23.035121    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:23.316975    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:23.512563    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:23.593065    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:23.815096    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:24.013381    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:24.017819    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:24.384660    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:24.504752    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:24.594953    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:24.820384    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:25.002004    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:25.089052    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:25.388699    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:25.514185    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:25.588263    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:25.809479    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:26.018600    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:26.032109    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:26.321511    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:26.514063    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:26.522849    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:26.812684    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:27.020574    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:27.083837    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:27.387975    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:27.587162    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:27.598150    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:27.825734    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:28.805767    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:28.807047    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:28.820051    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:29.245435    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:29.251828    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:29.325290    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:30.197303    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:30.198928    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:30.211139    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:30.463751    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:30.506630    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:30.525132    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:30.887098    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:31.099022    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:31.105026    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:31.391127    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:31.590618    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:31.607467    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:31.899711    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:32.092956    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:32.105607    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:32.396631    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:32.596957    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:32.693149    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:32.894866    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:33.088581    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:33.098079    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:33.395066    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:33.691289    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:33.789824    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:33.891280    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:34.091994    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:34.104475    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:34.388094    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:34.593354    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:34.601282    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:35.097673    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:35.098416    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:35.190140    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:35.395219    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:35.600806    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:35.687963    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:36.193461    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:36.193461    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:36.196946    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:36.593416    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:36.790648    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:36.795911    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:36.996970    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:37.191720    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:37.193624    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:37.391522    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:37.689319    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:37.693017    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:37.997599    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:38.086036    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:38.099727    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:38.394652    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:38.591529    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:38.597343    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:38.988742    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:39.087922    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:39.187419    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:39.504575    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:39.602966    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:39.792776    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:40.089375    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:40.186752    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:40.293854    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:40.792673    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:40.793002    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:40.797994    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:40.996592    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:41.188327    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:41.197121    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:41.390096    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:41.596715    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:41.600122    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:41.893870    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:42.187465    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:42.200979    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:42.489421    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:42.587455    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:42.686958    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:42.890981    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:43.087539    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:43.097149    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:43.388801    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:43.586192    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:43.597031    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:43.890605    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:44.004835    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:44.098485    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:44.389008    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:44.584914    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:44.601713    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:44.889633    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:45.088674    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:45.094994    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:45.396455    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:45.507233    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:45.597961    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:45.904743    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:46.186305    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:46.197911    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:46.310415    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:46.503212    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:46.598645    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:46.805884    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:46.999936    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:47.012928    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:47.307040    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:47.503071    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:47.590532    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:47.808819    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:48.005632    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:48.090784    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:48.397850    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:48.500584    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:48.592449    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:48.890750    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:49.084638    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:49.096197    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:49.305819    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:49.504906    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:49.594951    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:49.805722    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:50.007869    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:50.010894    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:50.307727    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:50.504936    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:50.595433    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:50.810862    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:51.003623    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:51.090721    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:51.309600    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:51.499129    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:51.511144    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:51.941936    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:51.998725    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:52.013361    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:52.306383    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:52.503834    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:52.514310    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:52.809005    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:53.085642    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:53.091644    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:53.312173    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:53.502225    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:53.520230    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:53.805311    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:54.002313    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:54.014311    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:54.307060    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:54.498787    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:54.514441    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:54.808278    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:54.999581    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:55.013065    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:55.390013    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:55.505528    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:55.514355    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:55.810509    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:56.005440    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:56.099011    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:56.388986    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:56.500370    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:56.593924    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:56.804483    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:56.998783    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:57.094587    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:57.307637    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:57.503875    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:57.592709    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:57.806752    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:58.002170    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:58.022988    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:58.434881    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:58.503036    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:58.517227    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:58.805908    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:59.003829    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:59.017586    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:59.308379    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:27:59.502913    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:27:59.591998    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:27:59.808932    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:00.007685    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:00.094894    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:00.308405    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:00.498221    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:00.517213    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:00.807203    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:01.001431    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:01.015267    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:01.397256    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:01.503185    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:01.592988    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:01.807385    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:02.002108    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:02.091612    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:02.307084    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:02.584849    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:02.603815    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:02.805475    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:03.001100    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:03.091297    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:03.304897    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:03.502360    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:03.590481    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:03.889915    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:04.001382    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:04.090870    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:04.389760    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:04.502799    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:04.588288    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:04.806815    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:05.003710    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:05.013391    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:05.306192    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:05.498839    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:05.527077    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:05.816212    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:05.999060    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:06.023134    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:06.313874    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:06.502989    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:06.597056    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:06.890182    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:07.000544    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:07.019884    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:07.307852    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:07.499786    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:07.514782    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:07.807522    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:07.999209    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:08.013164    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:08.387636    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:08.500076    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:08.589607    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:08.805702    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:09.002813    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:09.013806    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:09.307283    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:09.501009    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:09.594685    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:09.894027    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:10.013279    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:10.090372    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:10.387384    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:10.585417    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:10.589541    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:10.886980    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:11.003798    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:11.091223    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:11.309874    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:11.944962    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:11.947661    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:11.951547    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:12.016174    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:12.024375    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:12.308715    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:12.502874    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:12.513209    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:12.806442    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:13.002776    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:13.091304    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:13.391815    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:13.514171    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:13.522610    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:13.807171    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:14.004123    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:14.088599    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:14.387684    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:14.591556    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:14.596326    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:14.890821    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:15.189882    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:15.294860    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:15.388487    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:15.585482    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:15.594870    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:15.804964    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:15.999912    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:16.019051    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:16.306987    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:16.500357    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:16.514945    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:16.809693    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:17.084280    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:17.097066    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:17.306212    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:17.499735    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:17.516127    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:17.805679    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:18.003783    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:18.090444    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:18.388401    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:18.501603    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:18.593579    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:18.805069    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:19.002452    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:19.091463    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:19.315089    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:19.500139    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:19.520425    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:20.101783    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:20.102167    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:20.106538    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:20.394600    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:20.504233    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:20.516461    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:20.888349    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:21.006005    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:21.089857    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:21.309373    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:21.586469    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:21.605171    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:21.809189    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:22.002789    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:22.101253    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:22.506302    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:22.509316    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:22.595479    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:22.813742    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:23.002007    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:23.014015    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:23.306675    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:23.585448    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:23.597466    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:23.903450    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:23.999456    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:24.099451    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:24.304432    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:24.502437    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:24.595462    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:24.807458    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:25.011275    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:25.101968    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:25.306219    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:25.500216    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:25.514227    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:25.808972    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:26.005419    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:26.094204    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:26.392463    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:26.500828    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:26.598495    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:26.891312    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:27.003825    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:27.094260    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:27.397086    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:27.502886    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:27.603218    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:27.806735    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:28.084868    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:28.101912    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:28.387000    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:28.503889    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:28.590099    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:28.807102    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:29.086271    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:29.093019    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:29.305613    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:29.504135    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:29.512270    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:29.806643    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:30.000715    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:30.092610    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:30.386194    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:30.503323    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:30.592205    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:30.806206    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:31.000394    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:31.018570    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:31.309282    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:31.501730    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:31.594285    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:31.891935    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:31.999808    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:32.090870    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:32.390482    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:32.586120    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:32.594119    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:32.806665    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:33.007280    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:33.098614    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:33.306249    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:33.502359    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:33.591015    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:33.806091    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:34.001790    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:34.094867    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:34.309102    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:34.503176    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:34.592585    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:34.808928    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:35.000409    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:35.091783    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:35.306504    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:35.504351    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:35.597934    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:35.808134    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:36.001861    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:36.094040    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:36.307123    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:36.501340    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:36.593141    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:36.805636    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:37.001875    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:37.015500    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:37.307159    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:37.598181    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:37.603853    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:37.888265    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:38.006726    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:38.090966    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:38.309022    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:38.501627    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:38.592346    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:38.809267    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:39.000932    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:39.091054    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:39.308213    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:39.503027    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:39.586305    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:39.807061    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:39.999366    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:40.093778    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:40.387181    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:40.501661    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:40.594953    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:40.894217    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:41.093263    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:41.105008    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:41.304910    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:41.502022    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:41.517843    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:41.808519    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:42.005357    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:42.092153    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:42.307697    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:42.510457    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:42.596617    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:42.805098    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:43.085763    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:43.098910    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:43.308852    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:43.504346    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:43.593168    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:43.806348    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:43.999791    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:44.092920    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:44.309756    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:44.499749    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:44.515500    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:44.806655    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:45.004025    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:45.014506    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:45.309799    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:45.504457    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:45.519154    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:45.809201    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:46.003270    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:46.085530    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:46.306353    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:46.506008    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:46.516243    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:46.887417    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:47.003784    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:47.095747    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:47.388496    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:47.584348    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:47.606266    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:47.888009    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:48.002672    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:48.097293    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:48.391733    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:48.501770    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:48.589695    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:48.805565    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:49.003993    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:49.090867    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:49.306831    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:49.502179    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:49.518086    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:49.809044    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:49.999117    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:50.018406    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:50.307615    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:50.503511    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:50.587608    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:50.806226    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:51.000743    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:51.092116    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:51.389747    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:51.502851    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:51.591685    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:52.707476    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:52.708780    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:52.712267    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:52.888666    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:53.002620    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:53.091736    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:53.388950    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:53.500002    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:53.597607    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:53.892064    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:53.999337    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:54.022435    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:54.307701    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:54.500680    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:54.515275    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:54.805568    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:56.089258    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:56.090381    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:56.102051    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:56.666688    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:56.666935    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:56.682806    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:56.808519    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:57.003522    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:57.016358    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:57.314719    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:57.499867    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:57.591508    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:57.888240    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:58.000163    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:58.016738    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:58.306527    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:58.501093    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:58.514456    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:58.810770    9696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0207 19:28:59.002989    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:59.112331    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:28:59.396746    9696 kapi.go:108] duration metric: took 3m42.2657537s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0207 19:28:59.592670    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:28:59.607847    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:00.001441    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:00.089394    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:00.517900    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:00.525564    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:01.000715    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:01.011698    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:01.502263    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:01.600790    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:02.002572    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:02.202408    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:02.597377    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:02.605575    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:03.190360    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:03.195913    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:03.588879    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:03.693310    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:04.000351    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:04.097098    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:04.502110    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:04.595526    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:05.001070    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:05.094635    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:05.502758    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:05.592007    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:06.003360    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:06.100469    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:06.498125    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:06.595060    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:07.000258    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:07.098818    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:07.502670    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:07.592428    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:08.002562    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:08.092586    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:08.505387    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:08.593640    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:09.001970    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:09.099519    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:09.499948    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:09.515537    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:10.004106    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:10.594096    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:10.600375    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:11.088225    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:11.093332    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:11.504294    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:11.599489    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:12.085779    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:12.195855    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:12.500924    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:12.592226    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:13.005780    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:13.014370    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:13.507640    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:13.594492    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:14.004829    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:14.092376    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:14.500424    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:14.593145    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:15.086964    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:15.092160    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:15.507812    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:15.518296    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:15.999971    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:16.087684    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:16.597616    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:16.692789    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:17.010560    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:17.020829    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:17.506813    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:17.515640    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:17.999044    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:18.012034    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:18.502815    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:18.532809    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:19.003811    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:19.086808    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:19.507474    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:19.587689    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:20.008914    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:20.018308    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:20.508141    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:20.592000    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:21.006694    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:21.024049    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:21.501581    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:21.519813    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:22.005794    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:22.019263    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:22.505927    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:22.519200    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:23.086466    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:23.097379    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:23.501489    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:23.796523    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:24.001478    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:24.015133    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:24.501821    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:24.514380    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:25.006191    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:25.013758    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:25.503221    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:25.517306    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:26.000902    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:26.018929    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:26.507496    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:26.518559    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:27.003756    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:27.017050    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0207 19:29:27.519207    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:27.523468    9696 kapi.go:108] duration metric: took 3m52.2114179s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0207 19:29:28.002907    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:28.504118    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:29.007091    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:29.502367    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:30.006207    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:30.503107    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:31.005269    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:31.501516    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:32.006202    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:32.500076    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:33.004212    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:33.500568    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:34.001251    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:34.499928    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:35.001943    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:35.506517    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:36.000555    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:36.500196    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:37.006499    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:37.503677    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:38.004655    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:38.502821    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:39.005042    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:39.500619    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:40.004471    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:40.502257    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:41.003203    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:41.501576    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:42.000962    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:42.498921    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:43.001697    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:43.503862    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:44.001541    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:44.503826    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:45.000074    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:45.500731    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:46.000812    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:46.506009    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:47.006159    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:47.502683    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:47.999734    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:48.510419    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:49.003139    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:49.506586    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:50.005263    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:50.503697    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:50.999934    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:51.502569    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:52.001486    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:52.500244    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:53.003568    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:53.498959    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:54.002633    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:54.503310    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:55.003884    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:55.505365    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:56.004095    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:56.501279    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:57.002874    9696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0207 19:29:57.500677    9696 kapi.go:108] duration metric: took 4m7.5820578s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0207 19:29:57.504123    9696 out.go:176] * Your GCP credentials will now be mounted into every pod created in the addons-20220207192142-8704 cluster.
	I0207 19:29:57.506572    9696 out.go:176] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0207 19:29:57.508687    9696 out.go:176] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0207 19:29:57.511220    9696 out.go:176] * Enabled addons: ingress-dns, storage-provisioner, helm-tiller, metrics-server, default-storageclass, volumesnapshots, olm, registry, ingress, csi-hostpath-driver, gcp-auth
	I0207 19:29:57.511220    9696 addons.go:417] enableAddons completed in 4m59.4982535s
	I0207 19:29:58.342296    9696 start.go:496] kubectl: 1.18.2, cluster: 1.23.3 (minor skew: 5)
	I0207 19:29:58.345951    9696 out.go:176] 
	W0207 19:29:58.346291    9696 out.go:241] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.23.3.
	I0207 19:29:58.350827    9696 out.go:176]   - Want kubectl v1.23.3? Try 'minikube kubectl -- get pods -A'
	I0207 19:29:58.352961    9696 out.go:176] * Done! kubectl is now configured to use "addons-20220207192142-8704" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-02-07 19:23:48 UTC, end at Mon 2022-02-07 19:35:18 UTC. --
	Feb 07 19:31:12 addons-20220207192142-8704 dockerd[473]: time="2022-02-07T19:31:12.337209600Z" level=info msg="ignoring event" container=eb5f6252c25deebcce4726908420abe0de664955fe090d88ba69b15cf27d85b4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 19:31:20 addons-20220207192142-8704 dockerd[473]: time="2022-02-07T19:31:20.178408600Z" level=info msg="ignoring event" container=d5cf09503761dc6711a77ee76a27dfef85edd4edaa5ea400d05714475394c05e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 19:31:20 addons-20220207192142-8704 dockerd[473]: time="2022-02-07T19:31:20.981643300Z" level=info msg="ignoring event" container=c836d4cf94454d51cbb7f1370159315fec4018e13226c413fe789b98f05149b0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 19:31:20 addons-20220207192142-8704 dockerd[473]: time="2022-02-07T19:31:20.982473400Z" level=info msg="ignoring event" container=dce0004fc4dd2bc24882435511b3edfd7e1a9f54e27abe4e9f1a611f50793374 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 19:31:21 addons-20220207192142-8704 dockerd[473]: time="2022-02-07T19:31:21.093275000Z" level=info msg="ignoring event" container=35479d90a8c8ec046cf536b8ef3b8a6317e3eb570ab3ccb9e25c543cfb087d8a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 19:31:21 addons-20220207192142-8704 dockerd[473]: time="2022-02-07T19:31:21.181218200Z" level=info msg="ignoring event" container=d9a103eefa2e2de47a8081ec1baddad82bf0d11704ebd11878a47e3ed4183fc7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 19:31:21 addons-20220207192142-8704 dockerd[473]: time="2022-02-07T19:31:21.181373100Z" level=info msg="ignoring event" container=4e1df8f9c4cf867e7d3d3d463b164251cf2ecea72d623630008452434611565d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 19:31:21 addons-20220207192142-8704 dockerd[473]: time="2022-02-07T19:31:21.182810100Z" level=info msg="ignoring event" container=03c843ece6c4a60844c764be23de1f2e194065b96dd9e6786525c7602160c067 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 19:31:21 addons-20220207192142-8704 dockerd[473]: time="2022-02-07T19:31:21.183244500Z" level=info msg="ignoring event" container=383f29a04302af2f990ad1433a96baeaf0310cf4e2fa747320d7601a9dd841f5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 19:31:21 addons-20220207192142-8704 dockerd[473]: time="2022-02-07T19:31:21.701174400Z" level=info msg="ignoring event" container=3ff807d0e2841adf47f7bfcada761da831ca0ed1854c1f7f52a78bbcfbf2b9dc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 19:31:21 addons-20220207192142-8704 dockerd[473]: time="2022-02-07T19:31:21.779953100Z" level=info msg="ignoring event" container=d8378c74272cab11656ffb852d64fd1cad6c0c5aa56eb839b887aa0025b15bc0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 19:31:22 addons-20220207192142-8704 dockerd[473]: time="2022-02-07T19:31:22.478724700Z" level=info msg="ignoring event" container=6cefc593f150ffc33d19a7b725c59361a7b58323ef0c3628ffdf9bc3dba62640 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 19:31:22 addons-20220207192142-8704 dockerd[473]: time="2022-02-07T19:31:22.479518100Z" level=info msg="ignoring event" container=57fb096c13eed002755f4304d9ddabc7d3cf8305179da6f7e241b87e4128ffb4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 19:31:22 addons-20220207192142-8704 dockerd[473]: time="2022-02-07T19:31:22.589837500Z" level=info msg="ignoring event" container=6358d83589f408cdd0d37588994dbb3535ded221c63ca07f8714518ba5a583f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 19:31:22 addons-20220207192142-8704 dockerd[473]: time="2022-02-07T19:31:22.690298300Z" level=info msg="ignoring event" container=7c42014cae792ee2de0118477c57a8db607759d88568e999a84db9db2f35fbf2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 19:31:33 addons-20220207192142-8704 dockerd[473]: time="2022-02-07T19:31:33.878208400Z" level=info msg="ignoring event" container=4874e14f7c7c03f787434a766d88ccadc079cd19b8773804a5c79d02ddb2937b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 19:31:33 addons-20220207192142-8704 dockerd[473]: time="2022-02-07T19:31:33.986678300Z" level=info msg="ignoring event" container=56975f256f4ae470b4a647b7914b4b4440274be14a9027fea36c62de5bad8712 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 19:31:34 addons-20220207192142-8704 dockerd[473]: time="2022-02-07T19:31:34.596684800Z" level=info msg="ignoring event" container=6ed56b3e451bbc93542edeaedb67e9ed45f3850252dc1dbef3aa0d4f9bdd733b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 19:31:34 addons-20220207192142-8704 dockerd[473]: time="2022-02-07T19:31:34.597246300Z" level=info msg="ignoring event" container=d48c3f4791feb7da7ef79466749b20e33f7470499e1a7c1ef569f0688e5d45b0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 19:31:55 addons-20220207192142-8704 dockerd[473]: time="2022-02-07T19:31:55.426008200Z" level=warning msg="reference for unknown type: " digest="sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4" remote="quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4"
	Feb 07 19:31:55 addons-20220207192142-8704 dockerd[473]: time="2022-02-07T19:31:55.897530600Z" level=info msg="Attempting next endpoint for pull after error: manifest unknown: manifest unknown"
	Feb 07 19:34:36 addons-20220207192142-8704 dockerd[473]: time="2022-02-07T19:34:36.512640400Z" level=warning msg="reference for unknown type: " digest="sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4" remote="quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4"
	Feb 07 19:34:37 addons-20220207192142-8704 dockerd[473]: time="2022-02-07T19:34:37.025059600Z" level=info msg="Attempting next endpoint for pull after error: manifest unknown: manifest unknown"
	Feb 07 19:35:03 addons-20220207192142-8704 dockerd[473]: time="2022-02-07T19:35:03.603813400Z" level=info msg="ignoring event" container=8708ca6d73d3abc7794dc63695ac5d9d376ff5bc998575c7c3ae926cce0c9ff3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 19:35:03 addons-20220207192142-8704 dockerd[473]: time="2022-02-07T19:35:03.853908000Z" level=info msg="ignoring event" container=27a5cf8474621f3476b5ecfdb846aa42132eb0bda25e0abf395c2d16218fa04f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID
	41473086964f5       nginx@sha256:da9c94bec1da829ebd52431a84502ec471c8e548ffb2cedbf36260fd9bd1d4d3                                           5 minutes ago       Running             nginx                     0                   e185a4dc69c77
	44912ebf98687       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:26c7b2454f1c946d7c80839251d939606620f37c2f275be2796c1ffd96c438f6            5 minutes ago       Running             gcp-auth                  0                   e6bfeb59caaf0
	607bb2139bcd3       quay.io/operator-framework/olm@sha256:e74b2ac57963c7f3ba19122a8c31c9f2a0deb3c0c5cac9e5323ccffd0ca198ed                  6 minutes ago       Running             packageserver             0                   d2006ccd6df67
	f8e92a6fe9cb3       quay.io/operator-framework/olm@sha256:e74b2ac57963c7f3ba19122a8c31c9f2a0deb3c0c5cac9e5323ccffd0ca198ed                  6 minutes ago       Running             packageserver             0                   e54e4f3c9d089
	c793934a34aa0       k8s.gcr.io/ingress-nginx/controller@sha256:0bc88eb15f9e7f84e8e56c14fa5735aaa488b840983f87bd79b1054190e660de             6 minutes ago       Running             controller                0                   3d59639edf1db
	bbcf8a1c8e225       quay.io/operator-framework/olm@sha256:e74b2ac57963c7f3ba19122a8c31c9f2a0deb3c0c5cac9e5323ccffd0ca198ed                  7 minutes ago       Running             olm-operator              0                   aadc4275c89ea
	6db39f8547ada       quay.io/operator-framework/olm@sha256:e74b2ac57963c7f3ba19122a8c31c9f2a0deb3c0c5cac9e5323ccffd0ca198ed                  7 minutes ago       Running             catalog-operator          0                   2377efaac1223
	4214ed3535486       c41e9fcadf5a2                                                                                                           8 minutes ago       Exited              patch                     1                   2096feaf58515
	8139c0919b2ff       k8s.gcr.io/ingress-nginx/kube-webhook-certgen@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660   8 minutes ago       Exited              create                    0                   26c25d074e1d4
	ef819ec55d3d0       gcr.io/google_containers/kube-registry-proxy@sha256:1040f25a5273de0d72c54865a8efd47e3292de9fb8e5353e3fa76736b854f2da    8 minutes ago       Running             registry-proxy            0                   a374c641e0a81
	34fd3cead592f       registry@sha256:d5459fcb27aecc752520df4b492b08358a1912fcdfa454f7d2101d4b09991daa                                        8 minutes ago       Running             registry                  0                   7278de6dc1803
	072d5f1e249a7       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f        9 minutes ago       Running             minikube-ingress-dns      0                   cb06e348653e0
	bd6ea2628e4f3       6e38f40d628db                                                                                                           10 minutes ago      Running             storage-provisioner       0                   1805315a10b8f
	eac108624350c       a4ca41631cc7a                                                                                                           10 minutes ago      Running             coredns                   0                   66f459238acc4
	2a41637cba689       9b7cc99821098                                                                                                           10 minutes ago      Running             kube-proxy                0                   d288b6bf264b6
	26a3ed9e43d5c       f40be0088a83e                                                                                                           10 minutes ago      Running             kube-apiserver            0                   ad9d54a500f11
	d4250f1a910c0       b07520cd7ab76                                                                                                           10 minutes ago      Running             kube-controller-manager   0                   fab1398d7fae0
	89bf167c795b2       99a3486be4f28                                                                                                           10 minutes ago      Running             kube-scheduler            0                   21655cbfa1d8c
	bcad8be8fa1ba       25f8c7f3da61c                                                                                                           10 minutes ago      Running             etcd                      0                   0c94b52786972
	
	* 
	* ==> coredns [eac108624350] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               addons-20220207192142-8704
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-20220207192142-8704
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=68b41900649d825bc98a620f335c8941b16741bb
	                    minikube.k8s.io/name=addons-20220207192142-8704
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_02_07T19_24_44_0700
	                    minikube.k8s.io/version=v1.25.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-20220207192142-8704
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Feb 2022 19:24:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-20220207192142-8704
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Feb 2022 19:35:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Feb 2022 19:31:22 +0000   Mon, 07 Feb 2022 19:24:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Feb 2022 19:31:22 +0000   Mon, 07 Feb 2022 19:24:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Feb 2022 19:31:22 +0000   Mon, 07 Feb 2022 19:24:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Feb 2022 19:31:22 +0000   Mon, 07 Feb 2022 19:24:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-20220207192142-8704
	Capacity:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52646744Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52646744Ki
	  pods:               110
	System Info:
	  Machine ID:                 f0d9fc3b84d34ab4ba684459888f0938
	  System UUID:                f0d9fc3b84d34ab4ba684459888f0938
	  Boot ID:                    63de5e8a-b025-4a3e-80b6-1ee5f15fec4d
	  Kernel Version:             5.10.16.3-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.12
	  Kubelet Version:            v1.23.3
	  Kube-Proxy Version:         v1.23.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                  ------------  ----------  ---------------  -------------  ---
	  default                     nginx                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	  gcp-auth                    gcp-auth-59b76855d9-9x4wd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m38s
	  ingress-nginx               ingress-nginx-controller-cc8496874-t5x5n              100m (0%)     0 (0%)      90Mi (0%)        0 (0%)         10m
	  kube-system                 coredns-64897985d-gsh99                               100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-addons-20220207192142-8704                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kube-apiserver-addons-20220207192142-8704             250m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-20220207192142-8704    200m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-54gxw                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-20220207192142-8704             100m (0%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 registry-fcmht                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 registry-proxy-k42vl                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  olm                         catalog-operator-755d759b4b-dvf7z                     10m (0%)      0 (0%)      80Mi (0%)        0 (0%)         9m52s
	  olm                         olm-operator-c755654d4-n6986                          10m (0%)      0 (0%)      160Mi (0%)       0 (0%)         9m52s
	  olm                         operatorhubio-catalog-plfkv                           10m (0%)      0 (0%)      50Mi (0%)        0 (0%)         7m43s
	  olm                         packageserver-6cf9698b46-gc67x                        10m (0%)      0 (0%)      50Mi (0%)        0 (0%)         7m40s
	  olm                         packageserver-6cf9698b46-rqlch                        10m (0%)      0 (0%)      50Mi (0%)        0 (0%)         7m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                900m (5%)   0 (0%)
	  memory             650Mi (1%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                                            Age                From        Message
	  ----     ------                                            ----               ----        -------
	  Normal   Starting                                          10m                kube-proxy  
	  Warning  listen tcp4 :30285: bind: address already in use  10m                kube-proxy  can't open port "nodePort for ingress-nginx/ingress-nginx-controller:http" (:30285/tcp4), skipping it
	  Warning  listen tcp4 :30878: bind: address already in use  10m                kube-proxy  can't open port "nodePort for ingress-nginx/ingress-nginx-controller:https" (:30878/tcp4), skipping it
	  Normal   NodeHasSufficientMemory                           10m (x6 over 10m)  kubelet     Node addons-20220207192142-8704 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure                             10m (x6 over 10m)  kubelet     Node addons-20220207192142-8704 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID                              10m (x6 over 10m)  kubelet     Node addons-20220207192142-8704 status is now: NodeHasSufficientPID
	  Normal   Starting                                          10m                kubelet     Starting kubelet.
	  Normal   NodeHasSufficientPID                              10m                kubelet     Node addons-20220207192142-8704 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced                           10m                kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory                           10m                kubelet     Node addons-20220207192142-8704 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure                             10m                kubelet     Node addons-20220207192142-8704 status is now: NodeHasNoDiskPressure
	  Normal   NodeReady                                         10m                kubelet     Node addons-20220207192142-8704 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000006] init: (1) ERROR: InitEntryUtilityVm:2425: UpdateTimezone failed
	[  +0.295687] init: (1) ERROR: UpdateTimezone:97: Etc/UTC timezone not found. Is the tzdata package installed?
	[  +0.000005] init: (1) ERROR: InitEntryUtilityVm:2425: UpdateTimezone failed
	[  +3.712644] init: (2) ERROR: UtilCreateProcessAndWait:474: /bin/mount failed with 2
	[  +0.000243] init: (1) ERROR: UtilCreateProcessAndWait:489: /bin/mount failed with status 0xff00
	[  +0.000075] init: (1) ERROR: ConfigMountFsTab:2077: Processing fstab with mount -a failed.
	[  +0.001959] init: (3) ERROR: UtilCreateProcessAndWait:474: /bin/mount failed with 2
	[  +0.000181] init: (1) ERROR: UtilCreateProcessAndWait:489: /bin/mount failed with status 0xff00
	[  +0.000010] init: (1) ERROR: MountPlan9:478: mount cache=mmap,noatime,trans=fd,rfdno=8,wfdno=8,msize=65536,aname=drvfs;path=C:\;uid=0;gid=0;symlinkroot=/mnt/ failed 17
	[  +0.001639] init: (4) ERROR: UtilCreateProcessAndWait:474: /bin/mount failed with 2
	[Feb 7 19:16] WSL2: Performing memory compaction.
	[Feb 7 19:18] WSL2: Performing memory compaction.
	[Feb 7 19:19] WSL2: Performing memory compaction.
	[Feb 7 19:20] WSL2: Performing memory compaction.
	[Feb 7 19:22] WSL2: Performing memory compaction.
	[Feb 7 19:23] WSL2: Performing memory compaction.
	[Feb 7 19:28] WSL2: Performing memory compaction.
	[Feb 7 19:30] WSL2: Performing memory compaction.
	[Feb 7 19:31] WSL2: Performing memory compaction.
	[Feb 7 19:32] WSL2: Performing memory compaction.
	[Feb 7 19:33] WSL2: Performing memory compaction.
	[Feb 7 19:35] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [bcad8be8fa1b] <==
	* {"level":"warn","ts":"2022-02-07T19:31:20.879Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"192.2853ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/kube-system/csi-hostpath-resizer-shkxl\" ","response":"range_response_count:1 size:1352"}
	{"level":"info","ts":"2022-02-07T19:31:20.879Z","caller":"traceutil/trace.go:171","msg":"trace[1872385205] range","detail":"{range_begin:/registry/endpointslices/kube-system/csi-hostpath-resizer-shkxl; range_end:; response_count:1; response_revision:1936; }","duration":"192.3269ms","start":"2022-02-07T19:31:20.687Z","end":"2022-02-07T19:31:20.879Z","steps":["trace[1872385205] 'agreement among raft nodes before linearized reading'  (duration: 192.2439ms)"],"step_count":1}
	{"level":"info","ts":"2022-02-07T19:31:20.983Z","caller":"traceutil/trace.go:171","msg":"trace[236591512] linearizableReadLoop","detail":"{readStateIndex:2090; appliedIndex:2090; }","duration":"100.3562ms","start":"2022-02-07T19:31:20.883Z","end":"2022-02-07T19:31:20.983Z","steps":["trace[236591512] 'read index received'  (duration: 100.3435ms)","trace[236591512] 'applied index is now lower than readState.Index'  (duration: 10.5µs)"],"step_count":2}
	{"level":"warn","ts":"2022-02-07T19:31:20.995Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"112.4212ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/csi-hostpath-provisioner-0\" ","response":"range_response_count:1 size:3896"}
	{"level":"info","ts":"2022-02-07T19:31:20.995Z","caller":"traceutil/trace.go:171","msg":"trace[1297672030] range","detail":"{range_begin:/registry/pods/kube-system/csi-hostpath-provisioner-0; range_end:; response_count:1; response_revision:1937; }","duration":"112.5635ms","start":"2022-02-07T19:31:20.883Z","end":"2022-02-07T19:31:20.995Z","steps":["trace[1297672030] 'agreement among raft nodes before linearized reading'  (duration: 100.5441ms)"],"step_count":1}
	{"level":"info","ts":"2022-02-07T19:31:20.997Z","caller":"traceutil/trace.go:171","msg":"trace[1010815632] transaction","detail":"{read_only:false; response_revision:1939; number_of_response:1; }","duration":"109.2336ms","start":"2022-02-07T19:31:20.887Z","end":"2022-02-07T19:31:20.997Z","steps":["trace[1010815632] 'process raft request'  (duration: 109.1664ms)"],"step_count":1}
	{"level":"info","ts":"2022-02-07T19:31:20.997Z","caller":"traceutil/trace.go:171","msg":"trace[283690558] transaction","detail":"{read_only:false; response_revision:1938; number_of_response:1; }","duration":"113.1436ms","start":"2022-02-07T19:31:20.884Z","end":"2022-02-07T19:31:20.997Z","steps":["trace[283690558] 'process raft request'  (duration: 99.5502ms)","trace[283690558] 'compare'  (duration: 12.7926ms)"],"step_count":2}
	{"level":"warn","ts":"2022-02-07T19:31:21.289Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"105.4112ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/kube-system/csi-hostpath-snapshotter\" ","response":"range_response_count:1 size:1273"}
	{"level":"info","ts":"2022-02-07T19:31:21.289Z","caller":"traceutil/trace.go:171","msg":"trace[1923092369] range","detail":"{range_begin:/registry/services/specs/kube-system/csi-hostpath-snapshotter; range_end:; response_count:1; response_revision:1940; }","duration":"105.5506ms","start":"2022-02-07T19:31:21.183Z","end":"2022-02-07T19:31:21.289Z","steps":["trace[1923092369] 'agreement among raft nodes before linearized reading'  (duration: 95.7431ms)"],"step_count":1}
	{"level":"warn","ts":"2022-02-07T19:31:21.289Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"106.6594ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/csi-hostpath-resizer-0\" ","response":"range_response_count:1 size:3787"}
	{"level":"info","ts":"2022-02-07T19:31:21.289Z","caller":"traceutil/trace.go:171","msg":"trace[1378946618] range","detail":"{range_begin:/registry/pods/kube-system/csi-hostpath-resizer-0; range_end:; response_count:1; response_revision:1940; }","duration":"106.7169ms","start":"2022-02-07T19:31:21.182Z","end":"2022-02-07T19:31:21.289Z","steps":["trace[1378946618] 'agreement among raft nodes before linearized reading'  (duration: 96.7602ms)"],"step_count":1}
	{"level":"warn","ts":"2022-02-07T19:31:21.289Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"106.7716ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/kube-system/csi-hostpath-resizer-59656d854b\" ","response":"range_response_count:1 size:3378"}
	{"level":"info","ts":"2022-02-07T19:31:21.289Z","caller":"traceutil/trace.go:171","msg":"trace[676132022] range","detail":"{range_begin:/registry/controllerrevisions/kube-system/csi-hostpath-resizer-59656d854b; range_end:; response_count:1; response_revision:1940; }","duration":"106.8148ms","start":"2022-02-07T19:31:21.182Z","end":"2022-02-07T19:31:21.289Z","steps":["trace[676132022] 'agreement among raft nodes before linearized reading'  (duration: 96.7692ms)"],"step_count":1}
	{"level":"warn","ts":"2022-02-07T19:31:33.005Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"116.4154ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2022-02-07T19:31:33.006Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"111.7812ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotcontents/\" range_end:\"/registry/snapshot.storage.k8s.io/volumesnapshotcontents0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-02-07T19:31:33.006Z","caller":"traceutil/trace.go:171","msg":"trace[1490761706] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1992; }","duration":"116.6159ms","start":"2022-02-07T19:31:32.889Z","end":"2022-02-07T19:31:33.006Z","steps":["trace[1490761706] 'agreement among raft nodes before linearized reading'  (duration: 89.449ms)","trace[1490761706] 'range keys from in-memory index tree'  (duration: 26.9492ms)"],"step_count":2}
	{"level":"info","ts":"2022-02-07T19:31:33.006Z","caller":"traceutil/trace.go:171","msg":"trace[898189169] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotcontents/; range_end:/registry/snapshot.storage.k8s.io/volumesnapshotcontents0; response_count:0; response_revision:1992; }","duration":"111.83ms","start":"2022-02-07T19:31:32.894Z","end":"2022-02-07T19:31:33.006Z","steps":["trace[898189169] 'agreement among raft nodes before linearized reading'  (duration: 84.5384ms)","trace[898189169] 'range keys from in-memory index tree'  (duration: 27.2275ms)"],"step_count":2}
	{"level":"warn","ts":"2022-02-07T19:31:33.192Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"102.3268ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/volumesnapshotcontents.snapshot.storage.k8s.io\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-02-07T19:31:33.193Z","caller":"traceutil/trace.go:171","msg":"trace[1607916483] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/volumesnapshotcontents.snapshot.storage.k8s.io; range_end:; response_count:0; response_revision:1996; }","duration":"102.5171ms","start":"2022-02-07T19:31:33.090Z","end":"2022-02-07T19:31:33.193Z","steps":["trace[1607916483] 'agreement among raft nodes before linearized reading'  (duration: 86.99ms)"],"step_count":1}
	{"level":"warn","ts":"2022-02-07T19:31:33.192Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"102.572ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/volumesnapshots.snapshot.storage.k8s.io\" ","response":"range_response_count:1 size:36545"}
	{"level":"info","ts":"2022-02-07T19:31:33.193Z","caller":"traceutil/trace.go:171","msg":"trace[887388752] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/volumesnapshots.snapshot.storage.k8s.io; range_end:; response_count:1; response_revision:1996; }","duration":"103.0313ms","start":"2022-02-07T19:31:33.090Z","end":"2022-02-07T19:31:33.193Z","steps":["trace[887388752] 'agreement among raft nodes before linearized reading'  (duration: 87.2469ms)","trace[887388752] 'range keys from in-memory index tree'  (duration: 15.3143ms)"],"step_count":2}
	{"level":"info","ts":"2022-02-07T19:34:36.214Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1550}
	{"level":"info","ts":"2022-02-07T19:34:36.320Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1550,"took":"103.7878ms"}
	{"level":"warn","ts":"2022-02-07T19:35:02.793Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"100.0911ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:metrics-server\" ","response":"range_response_count:1 size:963"}
	{"level":"info","ts":"2022-02-07T19:35:02.794Z","caller":"traceutil/trace.go:171","msg":"trace[1955596551] range","detail":"{range_begin:/registry/clusterroles/system:metrics-server; range_end:; response_count:1; response_revision:2207; }","duration":"100.797ms","start":"2022-02-07T19:35:02.693Z","end":"2022-02-07T19:35:02.794Z","steps":["trace[1955596551] 'agreement among raft nodes before linearized reading'  (duration: 93.6266ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  19:35:19 up 19 min,  0 users,  load average: 0.48, 1.51, 1.24
	Linux addons-20220207192142-8704 5.10.16.3-microsoft-standard-WSL2 #1 SMP Fri Apr 2 22:23:49 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [26a3ed9e43d5] <==
	* E0207 19:29:39.279687       1 dispatcher.go:184] failed calling webhook "gcp-auth-mutate-sa.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s": dial tcp 10.106.208.51:443: connect: connection refused
	W0207 19:29:39.385858       1 dispatcher.go:180] Failed calling webhook, failing open gcp-auth-mutate-sa.k8s.io: failed calling webhook "gcp-auth-mutate-sa.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s": dial tcp 10.106.208.51:443: connect: connection refused
	E0207 19:29:39.386036       1 dispatcher.go:184] failed calling webhook "gcp-auth-mutate-sa.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s": dial tcp 10.106.208.51:443: connect: connection refused
	W0207 19:29:53.224268       1 dispatcher.go:180] Failed calling webhook, failing open gcp-auth-mutate-sa.k8s.io: failed calling webhook "gcp-auth-mutate-sa.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s": dial tcp 10.106.208.51:443: connect: connection refused
	E0207 19:29:53.224408       1 dispatcher.go:184] failed calling webhook "gcp-auth-mutate-sa.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s": dial tcp 10.106.208.51:443: connect: connection refused
	I0207 19:30:02.134040       1 controller.go:611] quota admission added evaluator for: ingresses.networking.k8s.io
	I0207 19:30:03.799999       1 alloc.go:329] "allocated clusterIPs" service="default/nginx" clusterIPs=map[IPv4:10.98.254.156]
	I0207 19:30:43.293165       1 trace.go:205] Trace[584902148]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (07-Feb-2022 19:30:42.413) (total time: 879ms):
	Trace[584902148]: ---"Transaction committed" 875ms (19:30:43.293)
	Trace[584902148]: [879.8387ms] [879.8387ms] END
	I0207 19:30:56.599431       1 controller.go:611] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0207 19:31:20.789752       1 trace.go:205] Trace[895063358]: "Delete" url:/api/v1/namespaces/kube-system/pods/csi-hostpath-provisioner-0,user-agent:kube-controller-manager/v1.23.3 (linux/amd64) kubernetes/816c97a/system:serviceaccount:kube-system:generic-garbage-collector,audit-id:fd12f314-d90e-4056-b4b4-d2fbcf28bb53,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (07-Feb-2022 19:31:20.282) (total time: 507ms):
	Trace[895063358]: ---"Object deleted from database" 506ms (19:31:20.789)
	Trace[895063358]: [507.3978ms] [507.3978ms] END
	I0207 19:31:20.790924       1 trace.go:205] Trace[1038279478]: "Delete" url:/apis/apps/v1/namespaces/kube-system/controllerrevisions/csi-hostpath-provisioner-59f54f,user-agent:kube-controller-manager/v1.23.3 (linux/amd64) kubernetes/816c97a/system:serviceaccount:kube-system:generic-garbage-collector,audit-id:66b44184-b343-4eeb-9bff-d9b4bf6aef78,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (07-Feb-2022 19:31:20.282) (total time: 508ms):
	Trace[1038279478]: ---"Object deleted from database" 508ms (19:31:20.790)
	Trace[1038279478]: [508.6946ms] [508.6946ms] END
	I0207 19:31:21.078313       1 trace.go:205] Trace[361443484]: "Delete" url:/api/v1/namespaces/kube-system/services/csi-hostpath-resizer,user-agent:kubectl/v1.23.3 (linux/amd64) kubernetes/816c97a,audit-id:bbb6f354-c03b-4a7a-96a9-04c3ce57a79e,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (07-Feb-2022 19:31:20.185) (total time: 812ms):
	Trace[361443484]: ---"Object deleted from database" 812ms (19:31:20.998)
	Trace[361443484]: [812.5081ms] [812.5081ms] END
	W0207 19:31:33.805371       1 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured
	W0207 19:31:34.092529       1 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured
	W0207 19:31:34.278030       1 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured
	E0207 19:35:02.780242       1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"metrics-server\" not found]"
	E0207 19:35:02.780341       1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"metrics-server\" not found]"
	
	* 
	* ==> kube-controller-manager [d4250f1a910c] <==
	* E0207 19:32:10.132889       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0207 19:32:17.828666       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0207 19:32:17.828838       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0207 19:32:48.782570       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0207 19:32:48.782717       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0207 19:32:57.080419       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0207 19:32:57.080556       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0207 19:32:58.733587       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0207 19:32:58.733758       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0207 19:33:35.099729       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0207 19:33:35.099849       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0207 19:33:51.077119       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0207 19:33:51.077392       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0207 19:33:57.939431       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0207 19:33:57.939730       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0207 19:34:14.396874       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0207 19:34:14.397049       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0207 19:34:29.845548       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0207 19:34:29.845714       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0207 19:34:36.977570       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0207 19:34:36.977727       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0207 19:34:49.358028       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0207 19:34:49.358147       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0207 19:35:19.224566       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0207 19:35:19.224685       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [2a41637cba68] <==
	* E0207 19:25:00.980395       1 proxier.go:647] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.16.3-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.16.3-microsoft-standard-WSL2/modules.builtin"
	I0207 19:25:00.994673       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I0207 19:25:01.078770       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I0207 19:25:01.082229       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I0207 19:25:01.085579       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0207 19:25:01.099248       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I0207 19:25:01.210013       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0207 19:25:01.210212       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0207 19:25:01.210313       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0207 19:25:01.508963       1 server_others.go:206] "Using iptables Proxier"
	I0207 19:25:01.509140       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0207 19:25:01.509161       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0207 19:25:01.509308       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0207 19:25:01.510792       1 server.go:656] "Version info" version="v1.23.3"
	I0207 19:25:01.512863       1 config.go:226] "Starting endpoint slice config controller"
	I0207 19:25:01.512969       1 config.go:317] "Starting service config controller"
	I0207 19:25:01.512989       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0207 19:25:01.512972       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0207 19:25:01.614233       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0207 19:25:01.614459       1 shared_informer.go:247] Caches are synced for service config 
	E0207 19:25:16.181239       1 proxier.go:1600] "can't open port, skipping it" err="listen tcp4 :30285: bind: address already in use" port={Description:nodePort for ingress-nginx/ingress-nginx-controller:http IP: IPFamily:4 Port:30285 Protocol:TCP}
	E0207 19:25:16.181500       1 proxier.go:1600] "can't open port, skipping it" err="listen tcp4 :30878: bind: address already in use" port={Description:nodePort for ingress-nginx/ingress-nginx-controller:https IP: IPFamily:4 Port:30878 Protocol:TCP}
	
	* 
	* ==> kube-scheduler [89bf167c795b] <==
	* W0207 19:24:41.034598       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0207 19:24:41.034703       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0207 19:24:41.041828       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0207 19:24:41.041944       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0207 19:24:41.099032       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0207 19:24:41.099143       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0207 19:24:41.207727       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0207 19:24:41.207843       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0207 19:24:41.247823       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0207 19:24:41.247928       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0207 19:24:41.279301       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0207 19:24:41.279424       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0207 19:24:41.356523       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0207 19:24:41.356624       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0207 19:24:41.379116       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0207 19:24:41.379223       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0207 19:24:41.434435       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0207 19:24:41.434609       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0207 19:24:41.478843       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0207 19:24:41.478995       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0207 19:24:41.619675       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0207 19:24:41.619806       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0207 19:24:41.628373       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0207 19:24:41.628403       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0207 19:24:43.594934       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-02-07 19:23:48 UTC, end at Mon 2022-02-07 19:35:19 UTC. --
	Feb 07 19:33:26 addons-20220207192142-8704 kubelet[2007]: E0207 19:33:26.185410    2007 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4\\\"\"" pod="olm/operatorhubio-catalog-plfkv" podUID=af19f15a-e636-4000-8938-b0ecd3fb2a71
	Feb 07 19:33:41 addons-20220207192142-8704 kubelet[2007]: E0207 19:33:41.185289    2007 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4\\\"\"" pod="olm/operatorhubio-catalog-plfkv" podUID=af19f15a-e636-4000-8938-b0ecd3fb2a71
	Feb 07 19:33:55 addons-20220207192142-8704 kubelet[2007]: E0207 19:33:55.188301    2007 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4\\\"\"" pod="olm/operatorhubio-catalog-plfkv" podUID=af19f15a-e636-4000-8938-b0ecd3fb2a71
	Feb 07 19:34:08 addons-20220207192142-8704 kubelet[2007]: E0207 19:34:08.185774    2007 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4\\\"\"" pod="olm/operatorhubio-catalog-plfkv" podUID=af19f15a-e636-4000-8938-b0ecd3fb2a71
	Feb 07 19:34:22 addons-20220207192142-8704 kubelet[2007]: E0207 19:34:22.186433    2007 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4\\\"\"" pod="olm/operatorhubio-catalog-plfkv" podUID=af19f15a-e636-4000-8938-b0ecd3fb2a71
	Feb 07 19:34:37 addons-20220207192142-8704 kubelet[2007]: E0207 19:34:37.035673    2007 remote_image.go:216] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: manifest for quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4 not found: manifest unknown: manifest unknown" image="quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4"
	Feb 07 19:34:37 addons-20220207192142-8704 kubelet[2007]: E0207 19:34:37.035830    2007 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: manifest for quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4 not found: manifest unknown: manifest unknown" image="quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4"
	Feb 07 19:34:37 addons-20220207192142-8704 kubelet[2007]: E0207 19:34:37.036223    2007 kuberuntime_manager.go:918] container &Container{Name:registry-server,Image:quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {<nil>} 10m DecimalSI},memory: {{52428800 0} {<nil>} 50Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h5vvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod operatorhubio-catalog-plfkv_olm(af19f15a-e636-4000-8938-b0ecd3fb2a71): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: manifest for quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4 not found: manifest unknown: manifest unknown
	Feb 07 19:34:37 addons-20220207192142-8704 kubelet[2007]: E0207 19:34:37.036304    2007 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: manifest for quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4 not found: manifest unknown: manifest unknown\"" pod="olm/operatorhubio-catalog-plfkv" podUID=af19f15a-e636-4000-8938-b0ecd3fb2a71
	Feb 07 19:34:46 addons-20220207192142-8704 kubelet[2007]: W0207 19:34:46.524385    2007 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Feb 07 19:34:50 addons-20220207192142-8704 kubelet[2007]: E0207 19:34:50.189005    2007 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4\\\"\"" pod="olm/operatorhubio-catalog-plfkv" podUID=af19f15a-e636-4000-8938-b0ecd3fb2a71
	Feb 07 19:35:04 addons-20220207192142-8704 kubelet[2007]: E0207 19:35:04.184949    2007 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4\\\"\"" pod="olm/operatorhubio-catalog-plfkv" podUID=af19f15a-e636-4000-8938-b0ecd3fb2a71
	Feb 07 19:35:04 addons-20220207192142-8704 kubelet[2007]: I0207 19:35:04.400250    2007 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6b1f7007-f81a-436b-ab42-07aec87b70e8-tmp-dir\") pod \"6b1f7007-f81a-436b-ab42-07aec87b70e8\" (UID: \"6b1f7007-f81a-436b-ab42-07aec87b70e8\") "
	Feb 07 19:35:04 addons-20220207192142-8704 kubelet[2007]: I0207 19:35:04.400609    2007 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdnk4\" (UniqueName: \"kubernetes.io/projected/6b1f7007-f81a-436b-ab42-07aec87b70e8-kube-api-access-mdnk4\") pod \"6b1f7007-f81a-436b-ab42-07aec87b70e8\" (UID: \"6b1f7007-f81a-436b-ab42-07aec87b70e8\") "
	Feb 07 19:35:04 addons-20220207192142-8704 kubelet[2007]: W0207 19:35:04.400823    2007 empty_dir.go:517] Warning: Failed to clear quota on /var/lib/kubelet/pods/6b1f7007-f81a-436b-ab42-07aec87b70e8/volumes/kubernetes.io~empty-dir/tmp-dir: clearQuota called, but quotas disabled
	Feb 07 19:35:04 addons-20220207192142-8704 kubelet[2007]: I0207 19:35:04.401518    2007 operation_generator.go:910] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b1f7007-f81a-436b-ab42-07aec87b70e8-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6b1f7007-f81a-436b-ab42-07aec87b70e8" (UID: "6b1f7007-f81a-436b-ab42-07aec87b70e8"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Feb 07 19:35:04 addons-20220207192142-8704 kubelet[2007]: I0207 19:35:04.405535    2007 operation_generator.go:910] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b1f7007-f81a-436b-ab42-07aec87b70e8-kube-api-access-mdnk4" (OuterVolumeSpecName: "kube-api-access-mdnk4") pod "6b1f7007-f81a-436b-ab42-07aec87b70e8" (UID: "6b1f7007-f81a-436b-ab42-07aec87b70e8"). InnerVolumeSpecName "kube-api-access-mdnk4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 07 19:35:04 addons-20220207192142-8704 kubelet[2007]: I0207 19:35:04.501818    2007 reconciler.go:300] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6b1f7007-f81a-436b-ab42-07aec87b70e8-tmp-dir\") on node \"addons-20220207192142-8704\" DevicePath \"\""
	Feb 07 19:35:04 addons-20220207192142-8704 kubelet[2007]: I0207 19:35:04.502020    2007 reconciler.go:300] "Volume detached for volume \"kube-api-access-mdnk4\" (UniqueName: \"kubernetes.io/projected/6b1f7007-f81a-436b-ab42-07aec87b70e8-kube-api-access-mdnk4\") on node \"addons-20220207192142-8704\" DevicePath \"\""
	Feb 07 19:35:04 addons-20220207192142-8704 kubelet[2007]: I0207 19:35:04.526342    2007 scope.go:110] "RemoveContainer" containerID="8708ca6d73d3abc7794dc63695ac5d9d376ff5bc998575c7c3ae926cce0c9ff3"
	Feb 07 19:35:04 addons-20220207192142-8704 kubelet[2007]: I0207 19:35:04.591935    2007 scope.go:110] "RemoveContainer" containerID="8708ca6d73d3abc7794dc63695ac5d9d376ff5bc998575c7c3ae926cce0c9ff3"
	Feb 07 19:35:04 addons-20220207192142-8704 kubelet[2007]: E0207 19:35:04.594201    2007 remote_runtime.go:572] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: 8708ca6d73d3abc7794dc63695ac5d9d376ff5bc998575c7c3ae926cce0c9ff3" containerID="8708ca6d73d3abc7794dc63695ac5d9d376ff5bc998575c7c3ae926cce0c9ff3"
	Feb 07 19:35:04 addons-20220207192142-8704 kubelet[2007]: I0207 19:35:04.594357    2007 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:docker ID:8708ca6d73d3abc7794dc63695ac5d9d376ff5bc998575c7c3ae926cce0c9ff3} err="failed to get container status \"8708ca6d73d3abc7794dc63695ac5d9d376ff5bc998575c7c3ae926cce0c9ff3\": rpc error: code = Unknown desc = Error: No such container: 8708ca6d73d3abc7794dc63695ac5d9d376ff5bc998575c7c3ae926cce0c9ff3"
	Feb 07 19:35:06 addons-20220207192142-8704 kubelet[2007]: I0207 19:35:06.199369    2007 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=6b1f7007-f81a-436b-ab42-07aec87b70e8 path="/var/lib/kubelet/pods/6b1f7007-f81a-436b-ab42-07aec87b70e8/volumes"
	Feb 07 19:35:15 addons-20220207192142-8704 kubelet[2007]: E0207 19:35:15.186246    2007 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4\\\"\"" pod="olm/operatorhubio-catalog-plfkv" podUID=af19f15a-e636-4000-8938-b0ecd3fb2a71
	
	* 
	* ==> storage-provisioner [bd6ea2628e4f] <==
	* I0207 19:25:20.789955       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0207 19:25:21.095611       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0207 19:25:21.096284       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0207 19:25:22.078821       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0207 19:25:22.079184       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-20220207192142-8704_93e6614c-fedb-4585-b44a-5dbca64788c8!
	I0207 19:25:22.080332       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7acc33a5-cb2f-44d0-9087-f520a3cd2326", APIVersion:"v1", ResourceVersion:"707", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-20220207192142-8704_93e6614c-fedb-4585-b44a-5dbca64788c8 became leader
	I0207 19:25:22.380768       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-20220207192142-8704_93e6614c-fedb-4585-b44a-5dbca64788c8!
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-20220207192142-8704 -n addons-20220207192142-8704
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-20220207192142-8704 -n addons-20220207192142-8704: (6.8576504s)
helpers_test.go:262: (dbg) Run:  kubectl --context addons-20220207192142-8704 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: ingress-nginx-admission-create-bt4g8 ingress-nginx-admission-patch-vxbvn operatorhubio-catalog-plfkv
helpers_test.go:273: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context addons-20220207192142-8704 describe pod ingress-nginx-admission-create-bt4g8 ingress-nginx-admission-patch-vxbvn operatorhubio-catalog-plfkv
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context addons-20220207192142-8704 describe pod ingress-nginx-admission-create-bt4g8 ingress-nginx-admission-patch-vxbvn operatorhubio-catalog-plfkv: exit status 1 (269.0096ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-bt4g8" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-vxbvn" not found
	Error from server (NotFound): pods "operatorhubio-catalog-plfkv" not found

** /stderr **
helpers_test.go:278: kubectl --context addons-20220207192142-8704 describe pod ingress-nginx-admission-create-bt4g8 ingress-nginx-admission-patch-vxbvn operatorhubio-catalog-plfkv: exit status 1
--- FAIL: TestAddons/parallel/MetricsServer (329.92s)

TestDockerFlags (533.47s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:46: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-20220207211018-8704 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker

=== CONT  TestDockerFlags
docker_test.go:46: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p docker-flags-20220207211018-8704 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: exit status 80 (7m58.196267s)

-- stdout --
	* [docker-flags-20220207211018-8704] minikube v1.25.1 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=13439
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Starting control plane node docker-flags-20220207211018-8704 in cluster docker-flags-20220207211018-8704
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "docker-flags-20220207211018-8704" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0207 21:10:18.444756    1512 out.go:297] Setting OutFile to fd 1504 ...
	I0207 21:10:18.521062    1512 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 21:10:18.521172    1512 out.go:310] Setting ErrFile to fd 1516...
	I0207 21:10:18.521172    1512 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 21:10:18.538733    1512 out.go:304] Setting JSON to false
	I0207 21:10:18.540449    1512 start.go:112] hostinfo: {"hostname":"minikube3","uptime":435837,"bootTime":1643832381,"procs":156,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0207 21:10:18.540449    1512 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0207 21:10:18.542543    1512 out.go:176] * [docker-flags-20220207211018-8704] minikube v1.25.1 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I0207 21:10:18.542543    1512 notify.go:174] Checking for updates...
	I0207 21:10:18.548897    1512 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0207 21:10:18.552287    1512 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0207 21:10:18.554591    1512 out.go:176]   - MINIKUBE_LOCATION=13439
	I0207 21:10:18.556851    1512 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0207 21:10:18.557640    1512 config.go:176] Loaded profile config "force-systemd-env-20220207210133-8704": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 21:10:18.558560    1512 config.go:176] Loaded profile config "kubernetes-upgrade-20220207210435-8704": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4-rc.0
	I0207 21:10:18.558975    1512 config.go:176] Loaded profile config "missing-upgrade-20220207210209-8704": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0207 21:10:18.559281    1512 driver.go:344] Setting default libvirt URI to qemu:///system
	I0207 21:10:21.662419    1512 docker.go:132] docker version: linux-20.10.12
	I0207 21:10:21.672530    1512 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 21:10:23.989741    1512 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.3169015s)
	I0207 21:10:23.990793    1512 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:58 OomKillDisable:true NGoroutines:52 SystemTime:2022-02-07 21:10:22.9365663 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 21:10:23.994242    1512 out.go:176] * Using the docker driver based on user configuration
	I0207 21:10:23.994303    1512 start.go:281] selected driver: docker
	I0207 21:10:23.994360    1512 start.go:798] validating driver "docker" against <nil>
	I0207 21:10:23.994463    1512 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0207 21:10:24.087233    1512 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 21:10:26.688114    1512 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.6008678s)
	I0207 21:10:26.688114    1512 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:58 OomKillDisable:true NGoroutines:52 SystemTime:2022-02-07 21:10:25.4800545 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 21:10:26.688921    1512 start_flags.go:288] no existing cluster config was found, will generate one from the flags 
	I0207 21:10:26.689821    1512 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	I0207 21:10:26.689821    1512 start_flags.go:826] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0207 21:10:26.689821    1512 cni.go:93] Creating CNI manager for ""
	I0207 21:10:26.689821    1512 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0207 21:10:26.689821    1512 start_flags.go:302] config:
	{Name:docker-flags-20220207211018-8704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:docker-flags-20220207211018-8704 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 21:10:26.693202    1512 out.go:176] * Starting control plane node docker-flags-20220207211018-8704 in cluster docker-flags-20220207211018-8704
	I0207 21:10:26.693202    1512 cache.go:120] Beginning downloading kic base image for docker with docker
	I0207 21:10:26.696191    1512 out.go:176] * Pulling base image ...
	I0207 21:10:26.696191    1512 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 21:10:26.696191    1512 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon
	I0207 21:10:26.696191    1512 preload.go:148] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4
	I0207 21:10:26.696191    1512 cache.go:57] Caching tarball of preloaded images
	I0207 21:10:26.696993    1512 preload.go:174] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0207 21:10:26.696993    1512 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.3 on docker
	I0207 21:10:26.697522    1512 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\docker-flags-20220207211018-8704\config.json ...
	I0207 21:10:26.697787    1512 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\docker-flags-20220207211018-8704\config.json: {Name:mk49670db14f80b640e565fa8b8d2c0031c87975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 21:10:28.151774    1512 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon, skipping pull
	I0207 21:10:28.151868    1512 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 exists in daemon, skipping load
	I0207 21:10:28.152083    1512 cache.go:208] Successfully downloaded all kic artifacts
	I0207 21:10:28.152155    1512 start.go:313] acquiring machines lock for docker-flags-20220207211018-8704: {Name:mk282e02bbffe9a67ba312d1fd681de6e0f577da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0207 21:10:28.152155    1512 start.go:317] acquired machines lock for "docker-flags-20220207211018-8704" in 0s
	I0207 21:10:28.152155    1512 start.go:89] Provisioning new machine with config: &{Name:docker-flags-20220207211018-8704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:docker-flags-20220207211018-8704 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0207 21:10:28.152155    1512 start.go:126] createHost starting for "" (driver="docker")
	I0207 21:10:28.160928    1512 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0207 21:10:28.160928    1512 start.go:160] libmachine.API.Create for "docker-flags-20220207211018-8704" (driver="docker")
	I0207 21:10:28.161459    1512 client.go:168] LocalClient.Create starting
	I0207 21:10:28.161632    1512 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0207 21:10:28.161632    1512 main.go:130] libmachine: Decoding PEM data...
	I0207 21:10:28.161632    1512 main.go:130] libmachine: Parsing certificate...
	I0207 21:10:28.162343    1512 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0207 21:10:28.162343    1512 main.go:130] libmachine: Decoding PEM data...
	I0207 21:10:28.162343    1512 main.go:130] libmachine: Parsing certificate...
	I0207 21:10:28.170112    1512 cli_runner.go:133] Run: docker network inspect docker-flags-20220207211018-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0207 21:10:29.698354    1512 cli_runner.go:180] docker network inspect docker-flags-20220207211018-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0207 21:10:29.698354    1512 cli_runner.go:186] Completed: docker network inspect docker-flags-20220207211018-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.5277084s)
	I0207 21:10:29.713441    1512 network_create.go:254] running [docker network inspect docker-flags-20220207211018-8704] to gather additional debugging logs...
	I0207 21:10:29.713441    1512 cli_runner.go:133] Run: docker network inspect docker-flags-20220207211018-8704
	W0207 21:10:31.254479    1512 cli_runner.go:180] docker network inspect docker-flags-20220207211018-8704 returned with exit code 1
	I0207 21:10:31.254479    1512 cli_runner.go:186] Completed: docker network inspect docker-flags-20220207211018-8704: (1.5410301s)
	I0207 21:10:31.254479    1512 network_create.go:257] error running [docker network inspect docker-flags-20220207211018-8704]: docker network inspect docker-flags-20220207211018-8704: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: docker-flags-20220207211018-8704
	I0207 21:10:31.254479    1512 network_create.go:259] output of [docker network inspect docker-flags-20220207211018-8704]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: docker-flags-20220207211018-8704
	
	** /stderr **
	I0207 21:10:31.264553    1512 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0207 21:10:32.780967    1512 cli_runner.go:186] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.5162699s)
	I0207 21:10:32.812528    1512 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0003120d8] misses:0}
	I0207 21:10:32.812528    1512 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:10:32.812528    1512 network_create.go:106] attempt to create docker network docker-flags-20220207211018-8704 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0207 21:10:32.826374    1512 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220207211018-8704
	W0207 21:10:34.393775    1512 cli_runner.go:180] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220207211018-8704 returned with exit code 1
	I0207 21:10:34.393885    1512 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220207211018-8704: (1.5672916s)
	W0207 21:10:34.393997    1512 network_create.go:98] failed to create docker network docker-flags-20220207211018-8704 192.168.49.0/24, will retry: subnet is taken
	I0207 21:10:34.416812    1512 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003120d8] amended:false}} dirty:map[] misses:0}
	I0207 21:10:34.418052    1512 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:10:34.448321    1512 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003120d8] amended:true}} dirty:map[192.168.49.0:0xc0003120d8 192.168.58.0:0xc00062a9a0] misses:0}
	I0207 21:10:34.448321    1512 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:10:34.448321    1512 network_create.go:106] attempt to create docker network docker-flags-20220207211018-8704 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0207 21:10:34.454405    1512 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220207211018-8704
	I0207 21:10:37.688923    1512 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220207211018-8704: (3.2344172s)
	I0207 21:10:37.688964    1512 network_create.go:90] docker network docker-flags-20220207211018-8704 192.168.58.0/24 created
	I0207 21:10:37.689018    1512 kic.go:106] calculated static IP "192.168.58.2" for the "docker-flags-20220207211018-8704" container
	I0207 21:10:37.699578    1512 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0207 21:10:39.161816    1512 cli_runner.go:186] Completed: docker ps -a --format {{.Names}}: (1.4619909s)
	I0207 21:10:39.167477    1512 cli_runner.go:133] Run: docker volume create docker-flags-20220207211018-8704 --label name.minikube.sigs.k8s.io=docker-flags-20220207211018-8704 --label created_by.minikube.sigs.k8s.io=true
	I0207 21:10:40.739579    1512 cli_runner.go:186] Completed: docker volume create docker-flags-20220207211018-8704 --label name.minikube.sigs.k8s.io=docker-flags-20220207211018-8704 --label created_by.minikube.sigs.k8s.io=true: (1.5719875s)
	I0207 21:10:40.742176    1512 oci.go:102] Successfully created a docker volume docker-flags-20220207211018-8704
	I0207 21:10:40.750340    1512 cli_runner.go:133] Run: docker run --rm --name docker-flags-20220207211018-8704-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20220207211018-8704 --entrypoint /usr/bin/test -v docker-flags-20220207211018-8704:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -d /var/lib
	I0207 21:10:53.501411    1512 cli_runner.go:186] Completed: docker run --rm --name docker-flags-20220207211018-8704-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20220207211018-8704 --entrypoint /usr/bin/test -v docker-flags-20220207211018-8704:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -d /var/lib: (12.7510052s)
	I0207 21:10:53.501411    1512 oci.go:106] Successfully prepared a docker volume docker-flags-20220207211018-8704
	I0207 21:10:53.501686    1512 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 21:10:53.501781    1512 kic.go:179] Starting extracting preloaded images to volume ...
	I0207 21:10:53.515990    1512 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-20220207211018-8704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0207 21:11:31.237265    1512 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-20220207211018-8704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir: (37.7208845s)
	I0207 21:11:31.237346    1512 kic.go:188] duration metric: took 37.735369 seconds to extract preloaded images to volume
	I0207 21:11:31.244909    1512 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 21:11:33.835907    1512 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.5905393s)
	I0207 21:11:33.836354    1512 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:80 OomKillDisable:true NGoroutines:60 SystemTime:2022-02-07 21:11:32.5705901 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 21:11:33.844475    1512 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0207 21:11:36.360705    1512 cli_runner.go:186] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.5161398s)
	I0207 21:11:36.367538    1512 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname docker-flags-20220207211018-8704 --name docker-flags-20220207211018-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20220207211018-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=docker-flags-20220207211018-8704 --network docker-flags-20220207211018-8704 --ip 192.168.58.2 --volume docker-flags-20220207211018-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8
	W0207 21:11:41.541102    1512 cli_runner.go:180] docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname docker-flags-20220207211018-8704 --name docker-flags-20220207211018-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20220207211018-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=docker-flags-20220207211018-8704 --network docker-flags-20220207211018-8704 --ip 192.168.58.2 --volume docker-flags-20220207211018-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 returned with exit code 125
	I0207 21:11:41.541173    1512 cli_runner.go:186] Completed: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname docker-flags-20220207211018-8704 --name docker-flags-20220207211018-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20220207211018-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=docker-flags-20220207211018-8704 --network docker-flags-20220207211018-8704 --ip 192.168.58.2 --volume docker-flags-20220207211018-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: (5.1733428s)
	I0207 21:11:41.541287    1512 client.go:171] LocalClient.Create took 1m13.3794468s
	I0207 21:11:43.557048    1512 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0207 21:11:43.570883    1512 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220207211018-8704
	W0207 21:11:45.120532    1512 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220207211018-8704 returned with exit code 1
	I0207 21:11:45.120556    1512 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220207211018-8704: (1.5495727s)
	I0207 21:11:45.120801    1512 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 21:11:45.415123    1512 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220207211018-8704
	W0207 21:11:46.858545    1512 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220207211018-8704 returned with exit code 1
	I0207 21:11:46.858545    1512 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220207211018-8704: (1.4432453s)
	W0207 21:11:46.858938    1512 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0207 21:11:46.858938    1512 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 21:11:46.858938    1512 start.go:129] duration metric: createHost completed in 1m18.7063741s
	I0207 21:11:46.858938    1512 start.go:80] releasing machines lock for "docker-flags-20220207211018-8704", held for 1m18.7063741s
	W0207 21:11:46.859220    1512 start.go:570] error starting host: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname docker-flags-20220207211018-8704 --name docker-flags-20220207211018-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20220207211018-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=docker-flags-20220207211018-8704 --network docker-flags-20220207211018-8704 --ip 192.168.58.2 --volume docker-flags-20220207211018-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	9a3f11b766ce8a2b4ba55628bdb790faecdb0e1ad1abb4a72fea4949678eaa4c
	
	stderr:
	docker: Error response from daemon: network docker-flags-20220207211018-8704 not found.
	I0207 21:11:46.876712    1512 cli_runner.go:133] Run: docker container inspect docker-flags-20220207211018-8704 --format={{.State.Status}}
	I0207 21:11:48.365981    1512 cli_runner.go:186] Completed: docker container inspect docker-flags-20220207211018-8704 --format={{.State.Status}}: (1.4892099s)
	W0207 21:11:48.384454    1512 start.go:575] delete host: Docker machine "docker-flags-20220207211018-8704" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	W0207 21:11:48.397465    1512 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname docker-flags-20220207211018-8704 --name docker-flags-20220207211018-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20220207211018-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=docker-flags-20220207211018-8704 --network docker-flags-20220207211018-8704 --ip 192.168.58.2 --volume docker-flags-20220207211018-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	9a3f11b766ce8a2b4ba55628bdb790faecdb0e1ad1abb4a72fea4949678eaa4c
	
	stderr:
	docker: Error response from daemon: network docker-flags-20220207211018-8704 not found.
	
	! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname docker-flags-20220207211018-8704 --name docker-flags-20220207211018-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20220207211018-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=docker-flags-20220207211018-8704 --network docker-flags-20220207211018-8704 --ip 192.168.58.2 --volume docker-flags-20220207211018-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	9a3f11b766ce8a2b4ba55628bdb790faecdb0e1ad1abb4a72fea4949678eaa4c
	
	stderr:
	docker: Error response from daemon: network docker-flags-20220207211018-8704 not found.
	
	I0207 21:11:48.397465    1512 start.go:585] Will try again in 5 seconds ...
	I0207 21:11:53.399413    1512 start.go:313] acquiring machines lock for docker-flags-20220207211018-8704: {Name:mk282e02bbffe9a67ba312d1fd681de6e0f577da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0207 21:11:54.110275    1512 start.go:317] acquired machines lock for "docker-flags-20220207211018-8704" in 389.3µs
	I0207 21:11:54.110379    1512 start.go:93] Skipping create...Using existing machine configuration
	I0207 21:11:54.110379    1512 fix.go:55] fixHost starting: 
	I0207 21:11:54.142732    1512 cli_runner.go:133] Run: docker container inspect docker-flags-20220207211018-8704 --format={{.State.Status}}
	I0207 21:11:55.734709    1512 cli_runner.go:186] Completed: docker container inspect docker-flags-20220207211018-8704 --format={{.State.Status}}: (1.591969s)
	I0207 21:11:55.734709    1512 fix.go:108] recreateIfNeeded on docker-flags-20220207211018-8704: state= err=<nil>
	I0207 21:11:55.734709    1512 fix.go:113] machineExists: false. err=machine does not exist
	I0207 21:11:55.775336    1512 out.go:176] * docker "docker-flags-20220207211018-8704" container is missing, will recreate.
	I0207 21:11:55.775569    1512 delete.go:124] DEMOLISHING docker-flags-20220207211018-8704 ...
	I0207 21:11:55.792540    1512 cli_runner.go:133] Run: docker container inspect docker-flags-20220207211018-8704 --format={{.State.Status}}
	I0207 21:11:57.449174    1512 cli_runner.go:186] Completed: docker container inspect docker-flags-20220207211018-8704 --format={{.State.Status}}: (1.6558612s)
	I0207 21:11:57.449316    1512 stop.go:79] host is in state 
	I0207 21:11:57.449457    1512 main.go:130] libmachine: Stopping "docker-flags-20220207211018-8704"...
	I0207 21:11:57.476491    1512 cli_runner.go:133] Run: docker container inspect docker-flags-20220207211018-8704 --format={{.State.Status}}
	I0207 21:11:59.002502    1512 cli_runner.go:186] Completed: docker container inspect docker-flags-20220207211018-8704 --format={{.State.Status}}: (1.5254167s)
	I0207 21:11:59.020088    1512 kic_runner.go:93] Run: systemctl --version
	I0207 21:11:59.020088    1512 kic_runner.go:114] Args: [docker exec --privileged docker-flags-20220207211018-8704 systemctl --version]
	I0207 21:12:00.693261    1512 kic_runner.go:93] Run: sudo service kubelet stop
	I0207 21:12:00.693303    1512 kic_runner.go:114] Args: [docker exec --privileged docker-flags-20220207211018-8704 sudo service kubelet stop]
	I0207 21:12:02.333284    1512 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container 9a3f11b766ce8a2b4ba55628bdb790faecdb0e1ad1abb4a72fea4949678eaa4c is not running
	
	** /stderr **
	W0207 21:12:02.333284    1512 kic.go:443] couldn't stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 9a3f11b766ce8a2b4ba55628bdb790faecdb0e1ad1abb4a72fea4949678eaa4c is not running
	I0207 21:12:02.346833    1512 kic_runner.go:93] Run: sudo service kubelet stop
	I0207 21:12:02.346833    1512 kic_runner.go:114] Args: [docker exec --privileged docker-flags-20220207211018-8704 sudo service kubelet stop]
	I0207 21:12:03.991929    1512 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container 9a3f11b766ce8a2b4ba55628bdb790faecdb0e1ad1abb4a72fea4949678eaa4c is not running
	
	** /stderr **
	W0207 21:12:03.991987    1512 kic.go:445] couldn't force stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 9a3f11b766ce8a2b4ba55628bdb790faecdb0e1ad1abb4a72fea4949678eaa4c is not running
	I0207 21:12:04.011281    1512 kic_runner.go:93] Run: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
	I0207 21:12:04.011281    1512 kic_runner.go:114] Args: [docker exec --privileged docker-flags-20220207211018-8704 docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}]
	I0207 21:12:05.599277    1512 kic.go:456] unable list containers : docker: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 9a3f11b766ce8a2b4ba55628bdb790faecdb0e1ad1abb4a72fea4949678eaa4c is not running
	I0207 21:12:05.599333    1512 kic.go:466] successfully stopped kubernetes!
	I0207 21:12:05.615031    1512 kic_runner.go:93] Run: pgrep kube-apiserver
	I0207 21:12:05.615031    1512 kic_runner.go:114] Args: [docker exec --privileged docker-flags-20220207211018-8704 pgrep kube-apiserver]
	I0207 21:12:08.927716    1512 cli_runner.go:133] Run: docker container inspect docker-flags-20220207211018-8704 --format={{.State.Status}}
	I0207 21:12:10.474276    1512 cli_runner.go:186] Completed: docker container inspect docker-flags-20220207211018-8704 --format={{.State.Status}}: (1.5464879s)
	I0207 21:12:13.497695    1512 cli_runner.go:133] Run: docker container inspect docker-flags-20220207211018-8704 --format={{.State.Status}}
	I0207 21:12:15.268624    1512 cli_runner.go:186] Completed: docker container inspect docker-flags-20220207211018-8704 --format={{.State.Status}}: (1.7709196s)
	[... identical "Run: docker container inspect docker-flags-20220207211018-8704 --format={{.State.Status}}" / "Completed" pairs repeated roughly every 5s from 21:12:18 through 21:16:31, elided ...]
	I0207 21:16:34.498130    1512 cli_runner.go:133] Run: docker container inspect docker-flags-20220207211018-8704 --format={{.State.Status}}
	I0207 21:16:35.994478    1512 cli_runner.go:186] Completed: docker container inspect docker-flags-20220207211018-8704 --format={{.State.Status}}: (1.49634s)
	I0207 21:16:38.994829    1512 stop.go:59] stop err: Maximum number of retries (60) exceeded
	I0207 21:16:38.994908    1512 delete.go:129] stophost failed (probably ok): Temporary Error: stop: Maximum number of retries (60) exceeded
	I0207 21:16:39.005910    1512 cli_runner.go:133] Run: docker container inspect docker-flags-20220207211018-8704 --format={{.State.Status}}
	I0207 21:16:40.476845    1512 cli_runner.go:186] Completed: docker container inspect docker-flags-20220207211018-8704 --format={{.State.Status}}: (1.4699202s)
	W0207 21:16:40.476845    1512 delete.go:135] deletehost failed: Docker machine "docker-flags-20220207211018-8704" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0207 21:16:40.483796    1512 cli_runner.go:133] Run: docker container inspect -f {{.Id}} docker-flags-20220207211018-8704
	I0207 21:16:41.938317    1512 cli_runner.go:186] Completed: docker container inspect -f {{.Id}} docker-flags-20220207211018-8704: (1.4545141s)
	I0207 21:16:41.944150    1512 cli_runner.go:133] Run: docker container inspect docker-flags-20220207211018-8704 --format={{.State.Status}}
	I0207 21:16:43.319475    1512 cli_runner.go:186] Completed: docker container inspect docker-flags-20220207211018-8704 --format={{.State.Status}}: (1.3753181s)
	I0207 21:16:43.332488    1512 cli_runner.go:133] Run: docker exec --privileged -t docker-flags-20220207211018-8704 /bin/bash -c "sudo init 0"
	W0207 21:16:45.010284    1512 cli_runner.go:180] docker exec --privileged -t docker-flags-20220207211018-8704 /bin/bash -c "sudo init 0" returned with exit code 1
	I0207 21:16:45.010284    1512 cli_runner.go:186] Completed: docker exec --privileged -t docker-flags-20220207211018-8704 /bin/bash -c "sudo init 0": (1.6777882s)
	I0207 21:16:45.010284    1512 oci.go:659] error shutdown docker-flags-20220207211018-8704: docker exec --privileged -t docker-flags-20220207211018-8704 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 9a3f11b766ce8a2b4ba55628bdb790faecdb0e1ad1abb4a72fea4949678eaa4c is not running
	I0207 21:16:46.021594    1512 cli_runner.go:133] Run: docker container inspect docker-flags-20220207211018-8704 --format={{.State.Status}}
	I0207 21:16:47.578132    1512 cli_runner.go:186] Completed: docker container inspect docker-flags-20220207211018-8704 --format={{.State.Status}}: (1.5565306s)
	I0207 21:16:47.578446    1512 oci.go:673] temporary error: container docker-flags-20220207211018-8704 status is  but expect it to be exited
	I0207 21:16:47.578446    1512 oci.go:679] Successfully shutdown container docker-flags-20220207211018-8704
	I0207 21:16:47.582893    1512 cli_runner.go:133] Run: docker rm -f -v docker-flags-20220207211018-8704
	I0207 21:16:49.087462    1512 cli_runner.go:186] Completed: docker rm -f -v docker-flags-20220207211018-8704: (1.5045614s)
	I0207 21:16:49.095472    1512 cli_runner.go:133] Run: docker container inspect -f {{.Id}} docker-flags-20220207211018-8704
	W0207 21:16:50.567482    1512 cli_runner.go:180] docker container inspect -f {{.Id}} docker-flags-20220207211018-8704 returned with exit code 1
	I0207 21:16:50.567482    1512 cli_runner.go:186] Completed: docker container inspect -f {{.Id}} docker-flags-20220207211018-8704: (1.4710019s)
	I0207 21:16:50.572500    1512 cli_runner.go:133] Run: docker network inspect docker-flags-20220207211018-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0207 21:16:52.210973    1512 cli_runner.go:180] docker network inspect docker-flags-20220207211018-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0207 21:16:52.211014    1512 cli_runner.go:186] Completed: docker network inspect docker-flags-20220207211018-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.6382989s)
	I0207 21:16:52.222834    1512 network_create.go:254] running [docker network inspect docker-flags-20220207211018-8704] to gather additional debugging logs...
	I0207 21:16:52.222834    1512 cli_runner.go:133] Run: docker network inspect docker-flags-20220207211018-8704
	W0207 21:16:53.777702    1512 cli_runner.go:180] docker network inspect docker-flags-20220207211018-8704 returned with exit code 1
	I0207 21:16:53.777864    1512 cli_runner.go:186] Completed: docker network inspect docker-flags-20220207211018-8704: (1.5547594s)
	I0207 21:16:53.777864    1512 network_create.go:257] error running [docker network inspect docker-flags-20220207211018-8704]: docker network inspect docker-flags-20220207211018-8704: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: docker-flags-20220207211018-8704
	I0207 21:16:53.777949    1512 network_create.go:259] output of [docker network inspect docker-flags-20220207211018-8704]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: docker-flags-20220207211018-8704
	
	** /stderr **
	W0207 21:16:53.778602    1512 delete.go:139] delete failed (probably ok) <nil>
	I0207 21:16:53.778602    1512 fix.go:120] Sleeping 1 second for extra luck!
	I0207 21:16:54.779396    1512 start.go:126] createHost starting for "" (driver="docker")
	I0207 21:16:54.785376    1512 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0207 21:16:54.785376    1512 start.go:160] libmachine.API.Create for "docker-flags-20220207211018-8704" (driver="docker")
	I0207 21:16:54.785376    1512 client.go:168] LocalClient.Create starting
	I0207 21:16:54.785376    1512 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0207 21:16:54.786373    1512 main.go:130] libmachine: Decoding PEM data...
	I0207 21:16:54.786373    1512 main.go:130] libmachine: Parsing certificate...
	I0207 21:16:54.786373    1512 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0207 21:16:54.786373    1512 main.go:130] libmachine: Decoding PEM data...
	I0207 21:16:54.786373    1512 main.go:130] libmachine: Parsing certificate...
	I0207 21:16:54.796370    1512 cli_runner.go:133] Run: docker network inspect docker-flags-20220207211018-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0207 21:16:56.513182    1512 cli_runner.go:180] docker network inspect docker-flags-20220207211018-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0207 21:16:56.513182    1512 cli_runner.go:186] Completed: docker network inspect docker-flags-20220207211018-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.7168034s)
	I0207 21:16:56.528194    1512 network_create.go:254] running [docker network inspect docker-flags-20220207211018-8704] to gather additional debugging logs...
	I0207 21:16:56.528194    1512 cli_runner.go:133] Run: docker network inspect docker-flags-20220207211018-8704
	W0207 21:16:58.079069    1512 cli_runner.go:180] docker network inspect docker-flags-20220207211018-8704 returned with exit code 1
	I0207 21:16:58.079069    1512 cli_runner.go:186] Completed: docker network inspect docker-flags-20220207211018-8704: (1.5508679s)
	I0207 21:16:58.079069    1512 network_create.go:257] error running [docker network inspect docker-flags-20220207211018-8704]: docker network inspect docker-flags-20220207211018-8704: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: docker-flags-20220207211018-8704
	I0207 21:16:58.079069    1512 network_create.go:259] output of [docker network inspect docker-flags-20220207211018-8704]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: docker-flags-20220207211018-8704
	
	** /stderr **
	I0207 21:16:58.084050    1512 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0207 21:16:59.595176    1512 cli_runner.go:186] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.5110269s)
	I0207 21:16:59.616575    1512 network.go:284] reusing subnet 192.168.49.0 that has expired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003120d8] amended:true}} dirty:map[192.168.49.0:0xc0003120d8 192.168.58.0:0xc00062a9a0] misses:0}
	I0207 21:16:59.616575    1512 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:16:59.616575    1512 network_create.go:106] attempt to create docker network docker-flags-20220207211018-8704 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0207 21:16:59.623377    1512 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220207211018-8704
	W0207 21:17:01.106813    1512 cli_runner.go:180] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220207211018-8704 returned with exit code 1
	I0207 21:17:01.106813    1512 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220207211018-8704: (1.4833154s)
	W0207 21:17:01.106939    1512 network_create.go:98] failed to create docker network docker-flags-20220207211018-8704 192.168.49.0/24, will retry: subnet is taken
	I0207 21:17:01.132581    1512 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003120d8] amended:true}} dirty:map[192.168.49.0:0xc0003120d8 192.168.58.0:0xc00062a9a0] misses:0}
	I0207 21:17:01.132581    1512 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:17:01.154005    1512 network.go:284] reusing subnet 192.168.58.0 that has expired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003120d8] amended:true}} dirty:map[192.168.49.0:0xc0003120d8 192.168.58.0:0xc00062a9a0] misses:1}
	I0207 21:17:01.154005    1512 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:17:01.154986    1512 network_create.go:106] attempt to create docker network docker-flags-20220207211018-8704 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0207 21:17:01.160985    1512 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220207211018-8704
	W0207 21:17:02.570590    1512 cli_runner.go:180] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220207211018-8704 returned with exit code 1
	I0207 21:17:02.570942    1512 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220207211018-8704: (1.4095983s)
	W0207 21:17:02.571042    1512 network_create.go:98] failed to create docker network docker-flags-20220207211018-8704 192.168.58.0/24, will retry: subnet is taken
	I0207 21:17:02.589586    1512 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003120d8 192.168.58.0:0xc00062a9a0] amended:false}} dirty:map[] misses:0}
	I0207 21:17:02.589586    1512 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:17:02.607586    1512 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003120d8 192.168.58.0:0xc00062a9a0] amended:true}} dirty:map[192.168.49.0:0xc0003120d8 192.168.58.0:0xc00062a9a0 192.168.67.0:0xc000312308] misses:0}
	I0207 21:17:02.607586    1512 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:17:02.607586    1512 network_create.go:106] attempt to create docker network docker-flags-20220207211018-8704 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0207 21:17:02.613580    1512 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220207211018-8704
	I0207 21:17:04.959425    1512 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220207211018-8704: (2.3458326s)
	I0207 21:17:04.959425    1512 network_create.go:90] docker network docker-flags-20220207211018-8704 192.168.67.0/24 created
	I0207 21:17:04.959425    1512 kic.go:106] calculated static IP "192.168.67.2" for the "docker-flags-20220207211018-8704" container
	I0207 21:17:04.973393    1512 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0207 21:17:06.605470    1512 cli_runner.go:186] Completed: docker ps -a --format {{.Names}}: (1.6320692s)
	I0207 21:17:06.610337    1512 cli_runner.go:133] Run: docker volume create docker-flags-20220207211018-8704 --label name.minikube.sigs.k8s.io=docker-flags-20220207211018-8704 --label created_by.minikube.sigs.k8s.io=true
	I0207 21:17:08.139423    1512 cli_runner.go:186] Completed: docker volume create docker-flags-20220207211018-8704 --label name.minikube.sigs.k8s.io=docker-flags-20220207211018-8704 --label created_by.minikube.sigs.k8s.io=true: (1.529078s)
	I0207 21:17:08.139423    1512 oci.go:102] Successfully created a docker volume docker-flags-20220207211018-8704
	I0207 21:17:08.147447    1512 cli_runner.go:133] Run: docker run --rm --name docker-flags-20220207211018-8704-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20220207211018-8704 --entrypoint /usr/bin/test -v docker-flags-20220207211018-8704:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -d /var/lib
	I0207 21:17:11.946572    1512 cli_runner.go:186] Completed: docker run --rm --name docker-flags-20220207211018-8704-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20220207211018-8704 --entrypoint /usr/bin/test -v docker-flags-20220207211018-8704:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -d /var/lib: (3.7991053s)
	I0207 21:17:11.946572    1512 oci.go:106] Successfully prepared a docker volume docker-flags-20220207211018-8704
	I0207 21:17:11.946572    1512 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 21:17:11.946572    1512 kic.go:179] Starting extracting preloaded images to volume ...
	I0207 21:17:11.953559    1512 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-20220207211018-8704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0207 21:18:00.759215    1512 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-20220207211018-8704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir: (48.8053545s)
	I0207 21:18:00.759377    1512 kic.go:188] duration metric: took 48.812553 seconds to extract preloaded images to volume
	I0207 21:18:00.770326    1512 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 21:18:03.327699    1512 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.557209s)
	I0207 21:18:03.327778    1512 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:58 OomKillDisable:true NGoroutines:52 SystemTime:2022-02-07 21:18:02.159583 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 21:18:03.336581    1512 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0207 21:18:05.895816    1512 cli_runner.go:186] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.5592211s)
	I0207 21:18:05.905049    1512 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname docker-flags-20220207211018-8704 --name docker-flags-20220207211018-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20220207211018-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=docker-flags-20220207211018-8704 --network docker-flags-20220207211018-8704 --ip 192.168.67.2 --volume docker-flags-20220207211018-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8
	W0207 21:18:08.373210    1512 cli_runner.go:180] docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname docker-flags-20220207211018-8704 --name docker-flags-20220207211018-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20220207211018-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=docker-flags-20220207211018-8704 --network docker-flags-20220207211018-8704 --ip 192.168.67.2 --volume docker-flags-20220207211018-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 returned with exit code 125
	I0207 21:18:08.373364    1512 cli_runner.go:186] Completed: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname docker-flags-20220207211018-8704 --name docker-flags-20220207211018-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20220207211018-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=docker-flags-20220207211018-8704 --network docker-flags-20220207211018-8704 --ip 192.168.67.2 --volume docker-flags-20220207211018-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: (2.4681484s)
	I0207 21:18:08.373364    1512 client.go:171] LocalClient.Create took 1m13.5876089s
	I0207 21:18:10.380916    1512 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0207 21:18:10.385997    1512 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220207211018-8704
	W0207 21:18:11.787541    1512 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220207211018-8704 returned with exit code 1
	I0207 21:18:11.787541    1512 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220207211018-8704: (1.4015365s)
	I0207 21:18:11.787541    1512 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 21:18:12.087333    1512 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220207211018-8704
	W0207 21:18:13.431448    1512 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220207211018-8704 returned with exit code 1
	I0207 21:18:13.431448    1512 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220207211018-8704: (1.3441081s)
	W0207 21:18:13.431448    1512 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0207 21:18:13.431448    1512 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 21:18:13.431448    1512 start.go:129] duration metric: createHost completed in 1m18.651647s
	I0207 21:18:13.439451    1512 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0207 21:18:13.445445    1512 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220207211018-8704
	W0207 21:18:14.836118    1512 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220207211018-8704 returned with exit code 1
	I0207 21:18:14.836118    1512 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220207211018-8704: (1.3906654s)
	I0207 21:18:14.836118    1512 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 21:18:15.072777    1512 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220207211018-8704
	W0207 21:18:16.388906    1512 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220207211018-8704 returned with exit code 1
	I0207 21:18:16.388906    1512 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220207211018-8704: (1.3161222s)
	W0207 21:18:16.388906    1512 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0207 21:18:16.388906    1512 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 21:18:16.388906    1512 fix.go:57] fixHost completed within 6m22.2765553s
	I0207 21:18:16.388906    1512 start.go:80] releasing machines lock for "docker-flags-20220207211018-8704", held for 6m22.2766226s
	W0207 21:18:16.389682    1512 out.go:241] * Failed to start docker container. Running "minikube delete -p docker-flags-20220207211018-8704" may fix it: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname docker-flags-20220207211018-8704 --name docker-flags-20220207211018-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20220207211018-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=docker-flags-20220207211018-8704 --network docker-flags-20220207211018-8704 --ip 192.168.67.2 --volume docker-flags-20220207211018-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	b9d05dd5eef50db5a332c864f1649e516fe23c27ec70d75ad328b8affb6d6288
	
	stderr:
	docker: Error response from daemon: network docker-flags-20220207211018-8704 not found.
	
	* Failed to start docker container. Running "minikube delete -p docker-flags-20220207211018-8704" may fix it: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname docker-flags-20220207211018-8704 --name docker-flags-20220207211018-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20220207211018-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=docker-flags-20220207211018-8704 --network docker-flags-20220207211018-8704 --ip 192.168.67.2 --volume docker-flags-20220207211018-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	b9d05dd5eef50db5a332c864f1649e516fe23c27ec70d75ad328b8affb6d6288
	
	stderr:
	docker: Error response from daemon: network docker-flags-20220207211018-8704 not found.
	
	I0207 21:18:16.394720    1512 out.go:176] 
	W0207 21:18:16.394720    1512 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname docker-flags-20220207211018-8704 --name docker-flags-20220207211018-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20220207211018-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=docker-flags-20220207211018-8704 --network docker-flags-20220207211018-8704 --ip 192.168.67.2 --volume docker-flags-20220207211018-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	b9d05dd5eef50db5a332c864f1649e516fe23c27ec70d75ad328b8affb6d6288
	
	stderr:
	docker: Error response from daemon: network docker-flags-20220207211018-8704 not found.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname docker-flags-20220207211018-8704 --name docker-flags-20220207211018-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20220207211018-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=docker-flags-20220207211018-8704 --network docker-flags-20220207211018-8704 --ip 192.168.67.2 --volume docker-flags-20220207211018-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	b9d05dd5eef50db5a332c864f1649e516fe23c27ec70d75ad328b8affb6d6288
	
	stderr:
	docker: Error response from daemon: network docker-flags-20220207211018-8704 not found.
	
	W0207 21:18:16.394720    1512 out.go:241] * 
	* 
	W0207 21:18:16.396062    1512 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0207 21:18:16.398807    1512 out.go:176] 

** /stderr **
docker_test.go:48: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p docker-flags-20220207211018-8704 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker" : exit status 80
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-20220207211018-8704 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:51: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p docker-flags-20220207211018-8704 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 85 (3.1366019s)

-- stdout --
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p docker-flags-20220207211018-8704"

-- /stdout --
docker_test.go:53: failed to 'systemctl show docker' inside minikube. args "out/minikube-windows-amd64.exe -p docker-flags-20220207211018-8704 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 85
docker_test.go:58: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control plane node \"\" does not exist.\n  To start a cluster, run: \"minikube start -p docker-flags-20220207211018-8704\"\n"*.
docker_test.go:58: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control plane node \"\" does not exist.\n  To start a cluster, run: \"minikube start -p docker-flags-20220207211018-8704\"\n"*.
docker_test.go:62: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-20220207211018-8704 ssh "sudo systemctl show docker --property=ExecStart --no-pager"

=== CONT  TestDockerFlags
docker_test.go:62: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p docker-flags-20220207211018-8704 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 85 (3.19019s)

-- stdout --
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p docker-flags-20220207211018-8704"

-- /stdout --
docker_test.go:64: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-windows-amd64.exe -p docker-flags-20220207211018-8704 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 85
docker_test.go:68: expected "out/minikube-windows-amd64.exe -p docker-flags-20220207211018-8704 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control plane node \"\" does not exist.\n  To start a cluster, run: \"minikube start -p docker-flags-20220207211018-8704\"\n"
panic.go:642: *** TestDockerFlags FAILED at 2022-02-07 21:18:22.8747763 +0000 GMT m=+7152.404340301
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestDockerFlags]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect docker-flags-20220207211018-8704
helpers_test.go:232: (dbg) Done: docker inspect docker-flags-20220207211018-8704: (1.2859974s)
helpers_test.go:236: (dbg) docker inspect docker-flags-20220207211018-8704:

-- stdout --
	[
	    {
	        "Id": "b9d05dd5eef50db5a332c864f1649e516fe23c27ec70d75ad328b8affb6d6288",
	        "Created": "2022-02-07T21:18:07.4557229Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "created",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 128,
	            "Error": "network docker-flags-20220207211018-8704 not found",
	            "StartedAt": "0001-01-01T00:00:00Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:50384aa4ebef3abc81b3b83296147bd747dcd04d4644d8f3150476ffa93e6889",
	        "ResolvConfPath": "",
	        "HostnamePath": "",
	        "HostsPath": "",
	        "LogPath": "",
	        "Name": "/docker-flags-20220207211018-8704",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "docker-flags-20220207211018-8704:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "docker-flags-20220207211018-8704",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d9bee6f8fa0b4c80275681433f094683f78d28fd9bbce3a2640ff3f0e61140cb-init/diff:/var/lib/docker/overlay2/75e1ee3c034aacc956b8c3ecc7ab61ac5c38e660082589ceb37efd240a771cc5/diff:/var/lib/docker/overlay2/189fe0ac50cbe021b1f58d4d3552848c814165ab41c880cc414a3d772ecf8a17/diff:/var/lib/docker/overlay2/1825c50829a945a491708c366e3adc3e6d891ec2fcbd7f13b41f06c64baa55d9/diff:/var/lib/docker/overlay2/0b9358d8d7de1369e9019714824c8f1007b6c08b3ebf296b7b1288610816a2ce/diff:/var/lib/docker/overlay2/689f6514ad269d91cd1861629d1949b077031e825417ef4dfb5621888699407b/diff:/var/lib/docker/overlay2/8dff862a1c6a46807e22567df5955e49a8aa3d0a1f2ad45ca46f2ab5374556fe/diff:/var/lib/docker/overlay2/ee466d69c85d056ef8068fd5652d1a05e5ca08f4f2880d8156cd2f212ceaaaa6/diff:/var/lib/docker/overlay2/86890d1d8e6826b123ee9ec4c463f6f91ad837f07b7147e0c6ef8c7e17b601da/diff:/var/lib/docker/overlay2/b657d041c7bdb28ab2fd58a8e3615ec574e7e5fcace80e88f630332a1ff67ff7/diff:/var/lib/docker/overlay2/4339b0
c7baf085cb3dc647fb19cd967f89fdd4e316e2bc806815c81fc17efc59/diff:/var/lib/docker/overlay2/36993c24ec6e3eb908331a1c00b702e3326415b7124d4d1788747ba328eb6e2a/diff:/var/lib/docker/overlay2/5b68d569c7973aeabb60b4d744a1b86cc3ebb8b284e55bbbe33576e97e3ac021/diff:/var/lib/docker/overlay2/57b6ab85187eac783753b7bdcafb75e9d26d3e9d22b614bbfa42fbf4a6e879f8/diff:/var/lib/docker/overlay2/e5f2f9b80a695305ffbe047f65db35cc276ac41f987ec84a5742b3769918cb79/diff:/var/lib/docker/overlay2/06d7d08e9ebfbe3202537757cc03ccaa87b749e7dd8354ae1978c44a1b14a690/diff:/var/lib/docker/overlay2/44604b9a5d1c918e1d3ebe374cc5b01af83b10aef4cbf54e72d7fd0b7be60646/diff:/var/lib/docker/overlay2/9d28038d0516655f0a12f3ec5220089de0a54540a27220e4f412dd3acc577f9b/diff:/var/lib/docker/overlay2/ec704366d20c2f84ce0d53c1b278507dc9cc66331cba15d90521a96d118d45af/diff:/var/lib/docker/overlay2/32b5b8eb800bf64445a63842604512878f22712d00a869b2104a1b528d6e8010/diff:/var/lib/docker/overlay2/6ff5152a44a5b0fd36c63aa1c7199ff420477113981a2dd750c29f82e1509669/diff:/var/lib/d
ocker/overlay2/b42f3edd75dd995daac9924998fafd7fe1b919f222b8185a3dfeef9a762660c7/diff:/var/lib/docker/overlay2/3cd19c2de3ea2cc271124c2c82db46bf5f550625dd02a5cde5c517af93c73caa/diff:/var/lib/docker/overlay2/b41830a6d20150650c5fb37bb60e7c06147734911fda7300a739cd023bb4789a/diff:/var/lib/docker/overlay2/925bf7a180aeb21aee1f13bf31ccc1f05a642fd383aabb499148885dcac5cfeb/diff:/var/lib/docker/overlay2/a5ec93ff5dc3e9d4a9975d8f1176019d102f9e8c319a4d5016f842be26bb5671/diff:/var/lib/docker/overlay2/37e01c18dc12ba0b9bd89093b244ef29456df1fb30fc4a8c3e5596b7b56ada0a/diff:/var/lib/docker/overlay2/6ce0b6587d0750a0ef5383637b91df31d4c1619e3a494b84c8714c5beebf1dbc/diff:/var/lib/docker/overlay2/8f4e875a02344a4926d7f5ad052151ca0eef0364a189b7ca60ebb338213d7c8e/diff:/var/lib/docker/overlay2/2790936ada4be199505c2cab1447b90a25076c4d2cbceadeb4a52026c71b9c60/diff:/var/lib/docker/overlay2/231fcc4021464c7f510cca7eecaabc94216fcc70cb62f97465c0d546064b25b8/diff:/var/lib/docker/overlay2/30845ecf75e8fd0fa04703004fc686bb8aff8eabe9437f4e7a1096a5bca
060a3/diff:/var/lib/docker/overlay2/3ae1acee47e31df704424e5e9dbaed72199c1cb3a318825a84cc9d2f08f1d807/diff:/var/lib/docker/overlay2/f9fe697b5ffab06c3cc31c3e2b7d924c32d4f0f4ee8fd29cb5e2b46e586b4d4d/diff:/var/lib/docker/overlay2/68afa844b9fe835f1997b14fe394dac6238ee6a39aa0abfc34a93c062d58f819/diff:/var/lib/docker/overlay2/94b84dda68e5a3dbf4319437e5d026f2c5c705496ca2d9922f7e865879146b56/diff:/var/lib/docker/overlay2/f133dd3fe2bf48f8bd9dced36254f4cc973685d2ddde9ee6e0f2467ea7d34592/diff:/var/lib/docker/overlay2/dafd5505dd817285a71ea03b36fb5684a0c844441c07c909d1e6c47b874b33d4/diff:/var/lib/docker/overlay2/c714cab2096f6325d72b4b73673c329c5db40f169c0d6d5d034bf8af87b90983/diff:/var/lib/docker/overlay2/ea71191eaaa01123105da39dc897cb6e11c028c8a2e91dc62ff85bb5e0fb1884/diff:/var/lib/docker/overlay2/6c554fb0a2463d3ef05cdb7858f9788626b5c72dbb4ea5a0431ec665de90dc74/diff:/var/lib/docker/overlay2/01e92d0b67f2be5d7d6ba3f84ffac8ad1e0c516b03b45346070503f62de32e5a/diff:/var/lib/docker/overlay2/f5f6f40c4df999e1ae2e5733fa6aad1cf8963e
bd6e2b9f849164ca5c149a4262/diff:/var/lib/docker/overlay2/e1eb2f89916ebfdb9a8d5aacfd9618edc370a018de0114d193b6069979c02aa7/diff:/var/lib/docker/overlay2/0e35d26329f1b7cf4e1b2bb03588192d3ea37764eab1ccc5a598db2164c932d2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d9bee6f8fa0b4c80275681433f094683f78d28fd9bbce3a2640ff3f0e61140cb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d9bee6f8fa0b4c80275681433f094683f78d28fd9bbce3a2640ff3f0e61140cb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d9bee6f8fa0b4c80275681433f094683f78d28fd9bbce3a2640ff3f0e61140cb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "docker-flags-20220207211018-8704",
	                "Source": "/var/lib/docker/volumes/docker-flags-20220207211018-8704/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "docker-flags-20220207211018-8704",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "docker-flags-20220207211018-8704",
	                "name.minikube.sigs.k8s.io": "docker-flags-20220207211018-8704",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "docker-flags-20220207211018-8704": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p docker-flags-20220207211018-8704 -n docker-flags-20220207211018-8704
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p docker-flags-20220207211018-8704 -n docker-flags-20220207211018-8704: exit status 7 (3.067126s)

-- stdout --
	Nonexistent

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "docker-flags-20220207211018-8704" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:176: Cleaning up "docker-flags-20220207211018-8704" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-20220207211018-8704

=== CONT  TestDockerFlags
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-20220207211018-8704: (44.4628473s)
--- FAIL: TestDockerFlags (533.47s)
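The `status error: exit status 7 (may be ok)` line above reflects minikube's convention of composing the `status` exit code as a bitmask of failed health checks. A minimal sketch of decoding such a code, assuming flag values of 1 (host), 2 (kubelet), and 4 (apiserver) — these assignments are an assumption about minikube's exit-code layout, not something this log states:

```go
package main

import "fmt"

// decodeStatusExit interprets a `minikube status` exit code as a set of
// failed health checks. The bit assignments (1 = host, 2 = kubelet,
// 4 = apiserver) are assumed for illustration.
func decodeStatusExit(code int) []string {
	checks := []struct {
		bit  int
		name string
	}{
		{1, "host not running"},
		{2, "kubelet not running"},
		{4, "apiserver not running"},
	}
	var failed []string
	for _, c := range checks {
		if code&c.bit != 0 {
			failed = append(failed, c.name)
		}
	}
	return failed
}

func main() {
	// helpers_test.go:240 above observed exit status 7 with state "Nonexistent".
	fmt.Println(decodeStatusExit(7))
}
```

Under that assumption, exit status 7 means all three checks failed, which is consistent with the `Nonexistent` host state printed above.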

TestForceSystemdEnv (566.94s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:151: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-20220207210133-8704 --memory=2048 --alsologtostderr -v=5 --driver=docker

=== CONT  TestForceSystemdEnv
docker_test.go:151: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p force-systemd-env-20220207210133-8704 --memory=2048 --alsologtostderr -v=5 --driver=docker: exit status 80 (8m47.5982477s)

-- stdout --
	* [force-systemd-env-20220207210133-8704] minikube v1.25.1 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=13439
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Starting control plane node force-systemd-env-20220207210133-8704 in cluster force-systemd-env-20220207210133-8704
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-env-20220207210133-8704" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0207 21:01:33.643093   10352 out.go:297] Setting OutFile to fd 1440 ...
	I0207 21:01:33.710120   10352 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 21:01:33.710120   10352 out.go:310] Setting ErrFile to fd 1728...
	I0207 21:01:33.710120   10352 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 21:01:33.726145   10352 out.go:304] Setting JSON to false
	I0207 21:01:33.729403   10352 start.go:112] hostinfo: {"hostname":"minikube3","uptime":435312,"bootTime":1643832381,"procs":154,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0207 21:01:33.729403   10352 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0207 21:01:33.735060   10352 out.go:176] * [force-systemd-env-20220207210133-8704] minikube v1.25.1 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I0207 21:01:33.735407   10352 notify.go:174] Checking for updates...
	I0207 21:01:33.738701   10352 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0207 21:01:33.741688   10352 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0207 21:01:33.744361   10352 out.go:176]   - MINIKUBE_LOCATION=13439
	I0207 21:01:33.746337   10352 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0207 21:01:33.748311   10352 out.go:176]   - MINIKUBE_FORCE_SYSTEMD=true
	I0207 21:01:33.750374   10352 config.go:176] Loaded profile config "NoKubernetes-20220207205647-8704": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0207 21:01:33.750374   10352 config.go:176] Loaded profile config "cert-expiration-20220207205647-8704": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 21:01:33.751410   10352 driver.go:344] Setting default libvirt URI to qemu:///system
	I0207 21:01:36.559092   10352 docker.go:132] docker version: linux-20.10.12
	I0207 21:01:36.565608   10352 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 21:01:38.721393   10352 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.1557737s)
	I0207 21:01:38.721393   10352 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:58 OomKillDisable:true NGoroutines:52 SystemTime:2022-02-07 21:01:37.7062521 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_
bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 21:01:38.729370   10352 out.go:176] * Using the docker driver based on user configuration
	I0207 21:01:38.729370   10352 start.go:281] selected driver: docker
	I0207 21:01:38.729370   10352 start.go:798] validating driver "docker" against <nil>
	I0207 21:01:38.729370   10352 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0207 21:01:38.809871   10352 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 21:01:45.558084   10352 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (6.7481783s)
	I0207 21:01:45.558245   10352 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:60 OomKillDisable:true NGoroutines:54 SystemTime:2022-02-07 21:01:40.0089093 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_
bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 21:01:45.558245   10352 start_flags.go:288] no existing cluster config was found, will generate one from the flags 
	I0207 21:01:45.559724   10352 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	I0207 21:01:45.559808   10352 start_flags.go:813] Wait components to verify : map[apiserver:true system_pods:true]
	I0207 21:01:45.560081   10352 cni.go:93] Creating CNI manager for ""
	I0207 21:01:45.560081   10352 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0207 21:01:45.560081   10352 start_flags.go:302] config:
	{Name:force-systemd-env-20220207210133-8704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:force-systemd-env-20220207210133-8704 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 21:01:45.564797   10352 out.go:176] * Starting control plane node force-systemd-env-20220207210133-8704 in cluster force-systemd-env-20220207210133-8704
	I0207 21:01:45.564797   10352 cache.go:120] Beginning downloading kic base image for docker with docker
	I0207 21:01:45.569785   10352 out.go:176] * Pulling base image ...
	I0207 21:01:45.569785   10352 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 21:01:45.569785   10352 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon
	I0207 21:01:45.569785   10352 preload.go:148] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4
	I0207 21:01:45.569785   10352 cache.go:57] Caching tarball of preloaded images
	I0207 21:01:45.570559   10352 preload.go:174] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0207 21:01:45.570559   10352 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.3 on docker
	I0207 21:01:45.571244   10352 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\force-systemd-env-20220207210133-8704\config.json ...
	I0207 21:01:45.571521   10352 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\force-systemd-env-20220207210133-8704\config.json: {Name:mk1cbf04d78d67d649366269728d7a93ddb67779 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 21:01:46.890074   10352 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon, skipping pull
	I0207 21:01:46.890149   10352 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 exists in daemon, skipping load
	I0207 21:01:46.890187   10352 cache.go:208] Successfully downloaded all kic artifacts
	I0207 21:01:46.890187   10352 start.go:313] acquiring machines lock for force-systemd-env-20220207210133-8704: {Name:mk83f98bd0e5a848ced5d2db4bc887500325807a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0207 21:01:46.890187   10352 start.go:317] acquired machines lock for "force-systemd-env-20220207210133-8704" in 0s
	I0207 21:01:46.890722   10352 start.go:89] Provisioning new machine with config: &{Name:force-systemd-env-20220207210133-8704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:force-systemd-env-20220207210133-8704 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Na
me: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0207 21:01:46.890947   10352 start.go:126] createHost starting for "" (driver="docker")
	I0207 21:01:46.897885   10352 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0207 21:01:46.898620   10352 start.go:160] libmachine.API.Create for "force-systemd-env-20220207210133-8704" (driver="docker")
	I0207 21:01:46.898834   10352 client.go:168] LocalClient.Create starting
	I0207 21:01:46.899367   10352 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0207 21:01:46.899708   10352 main.go:130] libmachine: Decoding PEM data...
	I0207 21:01:46.899708   10352 main.go:130] libmachine: Parsing certificate...
	I0207 21:01:46.899708   10352 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0207 21:01:46.900393   10352 main.go:130] libmachine: Decoding PEM data...
	I0207 21:01:46.900393   10352 main.go:130] libmachine: Parsing certificate...
	I0207 21:01:46.909267   10352 cli_runner.go:133] Run: docker network inspect force-systemd-env-20220207210133-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0207 21:01:49.129011   10352 cli_runner.go:180] docker network inspect force-systemd-env-20220207210133-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0207 21:01:49.129124   10352 cli_runner.go:186] Completed: docker network inspect force-systemd-env-20220207210133-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (2.2196831s)
	I0207 21:01:49.136031   10352 network_create.go:254] running [docker network inspect force-systemd-env-20220207210133-8704] to gather additional debugging logs...
	I0207 21:01:49.136031   10352 cli_runner.go:133] Run: docker network inspect force-systemd-env-20220207210133-8704
	W0207 21:01:50.562966   10352 cli_runner.go:180] docker network inspect force-systemd-env-20220207210133-8704 returned with exit code 1
	I0207 21:01:50.562966   10352 cli_runner.go:186] Completed: docker network inspect force-systemd-env-20220207210133-8704: (1.4269272s)
	I0207 21:01:50.562966   10352 network_create.go:257] error running [docker network inspect force-systemd-env-20220207210133-8704]: docker network inspect force-systemd-env-20220207210133-8704: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-env-20220207210133-8704
	I0207 21:01:50.562966   10352 network_create.go:259] output of [docker network inspect force-systemd-env-20220207210133-8704]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-env-20220207210133-8704
	
	** /stderr **
	I0207 21:01:50.570960   10352 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0207 21:01:52.037053   10352 cli_runner.go:186] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.4660859s)
	I0207 21:01:52.064056   10352 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00014e2a8] misses:0}
	I0207 21:01:52.064056   10352 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:01:52.064056   10352 network_create.go:106] attempt to create docker network force-systemd-env-20220207210133-8704 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0207 21:01:52.072049   10352 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220207210133-8704
	W0207 21:01:53.540623   10352 cli_runner.go:180] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220207210133-8704 returned with exit code 1
	I0207 21:01:54.011618   10352 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220207210133-8704: (1.4685661s)
	W0207 21:01:54.011618   10352 network_create.go:98] failed to create docker network force-systemd-env-20220207210133-8704 192.168.49.0/24, will retry: subnet is taken
	I0207 21:01:54.038604   10352 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014e2a8] amended:false}} dirty:map[] misses:0}
	I0207 21:01:54.038604   10352 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:01:54.060637   10352 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014e2a8] amended:true}} dirty:map[192.168.49.0:0xc00014e2a8 192.168.58.0:0xc0006862c8] misses:0}
	I0207 21:01:54.060637   10352 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:01:54.060637   10352 network_create.go:106] attempt to create docker network force-systemd-env-20220207210133-8704 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0207 21:01:54.069605   10352 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220207210133-8704
	I0207 21:01:56.474383   10352 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220207210133-8704: (2.404766s)
	I0207 21:01:56.474383   10352 network_create.go:90] docker network force-systemd-env-20220207210133-8704 192.168.58.0/24 created
	I0207 21:01:56.474383   10352 kic.go:106] calculated static IP "192.168.58.2" for the "force-systemd-env-20220207210133-8704" container
	I0207 21:01:56.488387   10352 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0207 21:01:57.941027   10352 cli_runner.go:186] Completed: docker ps -a --format {{.Names}}: (1.4526329s)
	I0207 21:01:57.947016   10352 cli_runner.go:133] Run: docker volume create force-systemd-env-20220207210133-8704 --label name.minikube.sigs.k8s.io=force-systemd-env-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true
	I0207 21:01:59.361225   10352 cli_runner.go:186] Completed: docker volume create force-systemd-env-20220207210133-8704 --label name.minikube.sigs.k8s.io=force-systemd-env-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true: (1.4140659s)
	I0207 21:01:59.361225   10352 oci.go:102] Successfully created a docker volume force-systemd-env-20220207210133-8704
	I0207 21:01:59.368050   10352 cli_runner.go:133] Run: docker run --rm --name force-systemd-env-20220207210133-8704-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20220207210133-8704 --entrypoint /usr/bin/test -v force-systemd-env-20220207210133-8704:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -d /var/lib
	I0207 21:02:06.817746   10352 cli_runner.go:186] Completed: docker run --rm --name force-systemd-env-20220207210133-8704-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20220207210133-8704 --entrypoint /usr/bin/test -v force-systemd-env-20220207210133-8704:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -d /var/lib: (7.4495034s)
	I0207 21:02:06.817829   10352 oci.go:106] Successfully prepared a docker volume force-systemd-env-20220207210133-8704
	I0207 21:02:06.817829   10352 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 21:02:06.817829   10352 kic.go:179] Starting extracting preloaded images to volume ...
	I0207 21:02:06.824629   10352 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-20220207210133-8704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0207 21:03:07.701897   10352 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-20220207210133-8704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir: (1m0.8769577s)
	I0207 21:03:07.702167   10352 kic.go:188] duration metric: took 60.884028 seconds to extract preloaded images to volume
	I0207 21:03:07.710128   10352 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 21:03:10.066346   10352 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.3562065s)
	I0207 21:03:10.066346   10352 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:62 OomKillDisable:true NGoroutines:58 SystemTime:2022-02-07 21:03:08.9631086 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 21:03:10.073693   10352 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0207 21:03:12.382990   10352 cli_runner.go:186] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.309131s)
	I0207 21:03:12.388837   10352 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-20220207210133-8704 --name force-systemd-env-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-20220207210133-8704 --network force-systemd-env-20220207210133-8704 --ip 192.168.58.2 --volume force-systemd-env-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8
	W0207 21:03:14.574360   10352 cli_runner.go:180] docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-20220207210133-8704 --name force-systemd-env-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-20220207210133-8704 --network force-systemd-env-20220207210133-8704 --ip 192.168.58.2 --volume force-systemd-env-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 returned with exit code 125
	I0207 21:03:14.574360   10352 cli_runner.go:186] Completed: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-20220207210133-8704 --name force-systemd-env-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-20220207210133-8704 --network force-systemd-env-20220207210133-8704 --ip 192.168.58.2 --volume force-systemd-env-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: (2.185512s)
	I0207 21:03:14.574360   10352 client.go:171] LocalClient.Create took 1m27.6750794s
	I0207 21:03:16.584918   10352 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0207 21:03:16.591192   10352 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220207210133-8704
	W0207 21:03:17.881953   10352 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220207210133-8704 returned with exit code 1
	I0207 21:03:17.881953   10352 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220207210133-8704: (1.290754s)
	I0207 21:03:17.882383   10352 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 21:03:18.165611   10352 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220207210133-8704
	W0207 21:03:19.486047   10352 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220207210133-8704 returned with exit code 1
	I0207 21:03:19.486047   10352 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220207210133-8704: (1.3204287s)
	W0207 21:03:19.486047   10352 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0207 21:03:19.486047   10352 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 21:03:19.486047   10352 start.go:129] duration metric: createHost completed in 1m32.5946275s
	I0207 21:03:19.486047   10352 start.go:80] releasing machines lock for "force-systemd-env-20220207210133-8704", held for 1m32.5953874s
	W0207 21:03:19.486047   10352 start.go:570] error starting host: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-20220207210133-8704 --name force-systemd-env-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-20220207210133-8704 --network force-systemd-env-20220207210133-8704 --ip 192.168.58.2 --volume force-systemd-env-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	2bafdd69a272aa303a0936f3f970aa6b1ec3c11ea9c7c25bcfe6899d3ca505d2
	
	stderr:
	docker: Error response from daemon: network force-systemd-env-20220207210133-8704 not found.
	I0207 21:03:19.496963   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:03:20.781572   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.284603s)
	W0207 21:03:20.781902   10352 start.go:575] delete host: Docker machine "force-systemd-env-20220207210133-8704" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	W0207 21:03:20.782008   10352 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-20220207210133-8704 --name force-systemd-env-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-20220207210133-8704 --network force-systemd-env-20220207210133-8704 --ip 192.168.58.2 --volume force-systemd-env-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	2bafdd69a272aa303a0936f3f970aa6b1ec3c11ea9c7c25bcfe6899d3ca505d2
	
	stderr:
	docker: Error response from daemon: network force-systemd-env-20220207210133-8704 not found.
	
	! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-20220207210133-8704 --name force-systemd-env-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-20220207210133-8704 --network force-systemd-env-20220207210133-8704 --ip 192.168.58.2 --volume force-systemd-env-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	2bafdd69a272aa303a0936f3f970aa6b1ec3c11ea9c7c25bcfe6899d3ca505d2
	
	stderr:
	docker: Error response from daemon: network force-systemd-env-20220207210133-8704 not found.
	
	I0207 21:03:20.782008   10352 start.go:585] Will try again in 5 seconds ...
	I0207 21:03:25.783516   10352 start.go:313] acquiring machines lock for force-systemd-env-20220207210133-8704: {Name:mk83f98bd0e5a848ced5d2db4bc887500325807a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0207 21:03:25.783516   10352 start.go:317] acquired machines lock for "force-systemd-env-20220207210133-8704" in 0s
	I0207 21:03:25.783516   10352 start.go:93] Skipping create...Using existing machine configuration
	I0207 21:03:25.783516   10352 fix.go:55] fixHost starting: 
	I0207 21:03:25.797499   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:03:27.073404   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.2757671s)
	I0207 21:03:27.073404   10352 fix.go:108] recreateIfNeeded on force-systemd-env-20220207210133-8704: state= err=<nil>
	I0207 21:03:27.073404   10352 fix.go:113] machineExists: false. err=machine does not exist
	I0207 21:03:27.077893   10352 out.go:176] * docker "force-systemd-env-20220207210133-8704" container is missing, will recreate.
	I0207 21:03:27.077970   10352 delete.go:124] DEMOLISHING force-systemd-env-20220207210133-8704 ...
	I0207 21:03:27.092076   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:03:28.520927   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.4288443s)
	I0207 21:03:28.520927   10352 stop.go:79] host is in state 
	I0207 21:03:28.520927   10352 main.go:130] libmachine: Stopping "force-systemd-env-20220207210133-8704"...
	I0207 21:03:28.533365   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:03:29.912717   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.3792994s)
	I0207 21:03:29.926633   10352 kic_runner.go:93] Run: systemctl --version
	I0207 21:03:29.926633   10352 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-20220207210133-8704 systemctl --version]
	I0207 21:03:31.430274   10352 kic_runner.go:93] Run: sudo service kubelet stop
	I0207 21:03:31.430274   10352 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-20220207210133-8704 sudo service kubelet stop]
	I0207 21:03:32.893654   10352 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container 2bafdd69a272aa303a0936f3f970aa6b1ec3c11ea9c7c25bcfe6899d3ca505d2 is not running
	
	** /stderr **
	W0207 21:03:32.893654   10352 kic.go:443] couldn't stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 2bafdd69a272aa303a0936f3f970aa6b1ec3c11ea9c7c25bcfe6899d3ca505d2 is not running
	I0207 21:03:32.905633   10352 kic_runner.go:93] Run: sudo service kubelet stop
	I0207 21:03:32.905633   10352 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-20220207210133-8704 sudo service kubelet stop]
	I0207 21:03:34.345890   10352 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container 2bafdd69a272aa303a0936f3f970aa6b1ec3c11ea9c7c25bcfe6899d3ca505d2 is not running
	
	** /stderr **
	W0207 21:03:34.345890   10352 kic.go:445] couldn't force stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 2bafdd69a272aa303a0936f3f970aa6b1ec3c11ea9c7c25bcfe6899d3ca505d2 is not running
	I0207 21:03:34.357661   10352 kic_runner.go:93] Run: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
	I0207 21:03:34.357661   10352 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-20220207210133-8704 docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}]
	I0207 21:03:35.820895   10352 kic.go:456] unable list containers : docker: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 2bafdd69a272aa303a0936f3f970aa6b1ec3c11ea9c7c25bcfe6899d3ca505d2 is not running
	I0207 21:03:35.821000   10352 kic.go:466] successfully stopped kubernetes!
	I0207 21:03:35.838006   10352 kic_runner.go:93] Run: pgrep kube-apiserver
	I0207 21:03:35.838006   10352 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-20220207210133-8704 pgrep kube-apiserver]
	I0207 21:03:38.786510   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:03:40.128207   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.341582s)
	I0207 21:03:43.139489   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:03:44.536664   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.3970186s)
	I0207 21:03:47.555927   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:03:49.056080   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.5000794s)
	I0207 21:03:52.067287   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:03:53.372378   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.3045334s)
	I0207 21:03:56.384485   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:03:57.708557   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.3239024s)
	I0207 21:04:00.720839   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:04:02.106035   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.3851885s)
	I0207 21:04:05.118427   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:04:06.454916   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.336445s)
	I0207 21:04:09.466507   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:04:10.798277   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.3307669s)
	I0207 21:04:13.810943   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:04:15.098578   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.2875028s)
	I0207 21:04:18.108996   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:04:19.419561   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.3105582s)
	I0207 21:04:22.435424   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:04:23.795981   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.3605s)
	I0207 21:04:26.809285   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:04:28.147164   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.3376403s)
	I0207 21:04:31.164562   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:04:32.521018   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.3564495s)
	I0207 21:04:35.532718   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:04:36.857229   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.3239249s)
	I0207 21:04:39.870750   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:04:41.273530   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.4026515s)
	I0207 21:04:44.283894   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:04:45.645819   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.3619178s)
	I0207 21:04:48.659394   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:04:50.027170   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.3677687s)
	I0207 21:04:53.043812   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:04:54.443593   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.3995583s)
	I0207 21:04:57.460604   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:04:58.905792   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.4450041s)
	I0207 21:05:01.927264   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:05:03.325337   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.3980663s)
	I0207 21:05:06.341429   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:05:07.705308   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.3637534s)
	I0207 21:05:10.720274   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:05:12.229336   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.5088771s)
	I0207 21:05:15.245659   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:05:16.567906   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.3222403s)
	I0207 21:05:19.581713   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:05:20.960904   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.3791838s)
	I0207 21:05:23.975511   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:05:25.356759   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.3810133s)
	I0207 21:05:28.371662   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:05:29.737076   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.3653586s)
	I0207 21:05:32.749013   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:05:34.072374   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.3233542s)
	I0207 21:05:37.086152   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:05:38.519183   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.4329578s)
	I0207 21:05:41.533724   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:05:42.893768   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.3599881s)
	I0207 21:05:45.907971   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:05:47.345141   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.4371625s)
	I0207 21:05:50.355467   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:05:51.775222   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.419748s)
	I0207 21:05:54.800654   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:05:56.553202   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.751511s)
	I0207 21:05:59.566597   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:06:01.052551   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.4857873s)
	I0207 21:06:04.071106   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:06:05.425515   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.3542165s)
	I0207 21:06:08.442041   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:06:09.869464   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.4274161s)
	I0207 21:06:12.884461   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:06:14.344321   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.4598525s)
	I0207 21:06:17.355241   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:06:18.748539   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.3931587s)
	I0207 21:06:21.758913   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:06:23.325312   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.5663907s)
	I0207 21:06:26.342961   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:06:27.976988   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.6340187s)
	I0207 21:06:30.994544   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:06:32.467144   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.4725925s)
	I0207 21:06:35.481474   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:06:36.930381   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.4488994s)
	I0207 21:06:39.948281   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:06:41.466304   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.518016s)
	I0207 21:06:44.482367   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:06:46.001081   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.518706s)
	I0207 21:06:49.019805   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:06:50.480784   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.4604738s)
	I0207 21:06:53.497251   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:06:54.880000   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.3827413s)
	I0207 21:06:57.892707   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:06:59.415314   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.522529s)
	I0207 21:07:02.428803   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:07:03.801311   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.3725002s)
	I0207 21:07:06.816703   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:07:08.335982   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.5192711s)
	I0207 21:07:11.351664   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:07:12.680022   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.3283512s)
	I0207 21:07:15.691815   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:07:17.129370   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.4375471s)
	I0207 21:07:20.146839   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:07:21.587109   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.439204s)
	I0207 21:07:24.597845   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:07:26.051832   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.4529911s)
	I0207 21:07:29.065199   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:07:30.698318   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.6331099s)
	I0207 21:07:33.713610   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:07:35.116876   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.4032585s)
	I0207 21:07:38.135211   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:07:39.531041   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.3957428s)
	I0207 21:07:42.544645   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:07:43.911123   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.3664705s)
	I0207 21:07:46.946137   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:07:48.397951   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.4518067s)
	I0207 21:07:51.441673   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:07:53.560067   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (2.1178306s)
	I0207 21:07:56.588524   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:07:58.256057   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.6673555s)
	I0207 21:08:01.295794   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:08:02.783388   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.4874549s)
	I0207 21:08:05.784291   10352 stop.go:59] stop err: Maximum number of retries (60) exceeded
	I0207 21:08:05.784355   10352 delete.go:129] stophost failed (probably ok): Temporary Error: stop: Maximum number of retries (60) exceeded
	I0207 21:08:05.796201   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:08:07.369204   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.5728917s)
	W0207 21:08:07.369204   10352 delete.go:135] deletehost failed: Docker machine "force-systemd-env-20220207210133-8704" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0207 21:08:07.378233   10352 cli_runner.go:133] Run: docker container inspect -f {{.Id}} force-systemd-env-20220207210133-8704
	I0207 21:08:09.133904   10352 cli_runner.go:186] Completed: docker container inspect -f {{.Id}} force-systemd-env-20220207210133-8704: (1.7554575s)
	I0207 21:08:09.145368   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:08:11.095641   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.9500485s)
	I0207 21:08:11.110027   10352 cli_runner.go:133] Run: docker exec --privileged -t force-systemd-env-20220207210133-8704 /bin/bash -c "sudo init 0"
	W0207 21:08:13.384211   10352 cli_runner.go:180] docker exec --privileged -t force-systemd-env-20220207210133-8704 /bin/bash -c "sudo init 0" returned with exit code 1
	I0207 21:08:13.384496   10352 cli_runner.go:186] Completed: docker exec --privileged -t force-systemd-env-20220207210133-8704 /bin/bash -c "sudo init 0": (2.2741174s)
	I0207 21:08:13.384541   10352 oci.go:659] error shutdown force-systemd-env-20220207210133-8704: docker exec --privileged -t force-systemd-env-20220207210133-8704 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 2bafdd69a272aa303a0936f3f970aa6b1ec3c11ea9c7c25bcfe6899d3ca505d2 is not running
	I0207 21:08:14.406915   10352 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}
	I0207 21:08:16.366553   10352 cli_runner.go:186] Completed: docker container inspect force-systemd-env-20220207210133-8704 --format={{.State.Status}}: (1.959421s)
	I0207 21:08:16.366686   10352 oci.go:673] temporary error: container force-systemd-env-20220207210133-8704 status is  but expect it to be exited
	I0207 21:08:16.366686   10352 oci.go:679] Successfully shutdown container force-systemd-env-20220207210133-8704
	I0207 21:08:16.383819   10352 cli_runner.go:133] Run: docker rm -f -v force-systemd-env-20220207210133-8704
	I0207 21:08:18.629741   10352 cli_runner.go:186] Completed: docker rm -f -v force-systemd-env-20220207210133-8704: (2.2458539s)
	I0207 21:08:18.644892   10352 cli_runner.go:133] Run: docker container inspect -f {{.Id}} force-systemd-env-20220207210133-8704
	W0207 21:08:20.263720   10352 cli_runner.go:180] docker container inspect -f {{.Id}} force-systemd-env-20220207210133-8704 returned with exit code 1
	I0207 21:08:20.263774   10352 cli_runner.go:186] Completed: docker container inspect -f {{.Id}} force-systemd-env-20220207210133-8704: (1.6186711s)
	I0207 21:08:20.271177   10352 cli_runner.go:133] Run: docker network inspect force-systemd-env-20220207210133-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0207 21:08:21.860964   10352 cli_runner.go:180] docker network inspect force-systemd-env-20220207210133-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0207 21:08:21.861111   10352 cli_runner.go:186] Completed: docker network inspect force-systemd-env-20220207210133-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.5892032s)
	I0207 21:08:21.867626   10352 network_create.go:254] running [docker network inspect force-systemd-env-20220207210133-8704] to gather additional debugging logs...
	I0207 21:08:21.867626   10352 cli_runner.go:133] Run: docker network inspect force-systemd-env-20220207210133-8704
	W0207 21:08:23.388346   10352 cli_runner.go:180] docker network inspect force-systemd-env-20220207210133-8704 returned with exit code 1
	I0207 21:08:23.388426   10352 cli_runner.go:186] Completed: docker network inspect force-systemd-env-20220207210133-8704: (1.5204545s)
	I0207 21:08:23.388426   10352 network_create.go:257] error running [docker network inspect force-systemd-env-20220207210133-8704]: docker network inspect force-systemd-env-20220207210133-8704: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-env-20220207210133-8704
	I0207 21:08:23.388426   10352 network_create.go:259] output of [docker network inspect force-systemd-env-20220207210133-8704]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-env-20220207210133-8704
	
	** /stderr **
	W0207 21:08:23.389550   10352 delete.go:139] delete failed (probably ok) <nil>
	I0207 21:08:23.389675   10352 fix.go:120] Sleeping 1 second for extra luck!
	I0207 21:08:24.414613   10352 start.go:126] createHost starting for "" (driver="docker")
	I0207 21:08:24.419174   10352 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0207 21:08:24.419804   10352 start.go:160] libmachine.API.Create for "force-systemd-env-20220207210133-8704" (driver="docker")
	I0207 21:08:24.419926   10352 client.go:168] LocalClient.Create starting
	I0207 21:08:24.420753   10352 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0207 21:08:24.421043   10352 main.go:130] libmachine: Decoding PEM data...
	I0207 21:08:24.421213   10352 main.go:130] libmachine: Parsing certificate...
	I0207 21:08:24.421577   10352 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0207 21:08:24.421778   10352 main.go:130] libmachine: Decoding PEM data...
	I0207 21:08:24.421778   10352 main.go:130] libmachine: Parsing certificate...
	I0207 21:08:24.434277   10352 cli_runner.go:133] Run: docker network inspect force-systemd-env-20220207210133-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0207 21:08:25.971502   10352 cli_runner.go:180] docker network inspect force-systemd-env-20220207210133-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0207 21:08:25.971654   10352 cli_runner.go:186] Completed: docker network inspect force-systemd-env-20220207210133-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.5372171s)
	I0207 21:08:25.979701   10352 network_create.go:254] running [docker network inspect force-systemd-env-20220207210133-8704] to gather additional debugging logs...
	I0207 21:08:25.979701   10352 cli_runner.go:133] Run: docker network inspect force-systemd-env-20220207210133-8704
	W0207 21:08:27.520625   10352 cli_runner.go:180] docker network inspect force-systemd-env-20220207210133-8704 returned with exit code 1
	I0207 21:08:27.520625   10352 cli_runner.go:186] Completed: docker network inspect force-systemd-env-20220207210133-8704: (1.5408588s)
	I0207 21:08:27.520701   10352 network_create.go:257] error running [docker network inspect force-systemd-env-20220207210133-8704]: docker network inspect force-systemd-env-20220207210133-8704: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-env-20220207210133-8704
	I0207 21:08:27.520768   10352 network_create.go:259] output of [docker network inspect force-systemd-env-20220207210133-8704]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-env-20220207210133-8704
	
	** /stderr **
	I0207 21:08:27.531942   10352 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0207 21:08:29.132297   10352 cli_runner.go:186] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.6001405s)
	I0207 21:08:29.148100   10352 network.go:284] reusing subnet 192.168.49.0 that has expired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014e2a8] amended:true}} dirty:map[192.168.49.0:0xc00014e2a8 192.168.58.0:0xc0006862c8] misses:0}
	I0207 21:08:29.148100   10352 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:08:29.148100   10352 network_create.go:106] attempt to create docker network force-systemd-env-20220207210133-8704 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0207 21:08:29.158938   10352 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220207210133-8704
	W0207 21:08:30.759543   10352 cli_runner.go:180] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220207210133-8704 returned with exit code 1
	I0207 21:08:30.759682   10352 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220207210133-8704: (1.6005128s)
	W0207 21:08:30.759859   10352 network_create.go:98] failed to create docker network force-systemd-env-20220207210133-8704 192.168.49.0/24, will retry: subnet is taken
	I0207 21:08:30.782746   10352 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014e2a8] amended:true}} dirty:map[192.168.49.0:0xc00014e2a8 192.168.58.0:0xc0006862c8] misses:0}
	I0207 21:08:30.782746   10352 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:08:30.806511   10352 network.go:284] reusing subnet 192.168.58.0 that has expired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014e2a8] amended:true}} dirty:map[192.168.49.0:0xc00014e2a8 192.168.58.0:0xc0006862c8] misses:1}
	I0207 21:08:30.807281   10352 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:08:30.807331   10352 network_create.go:106] attempt to create docker network force-systemd-env-20220207210133-8704 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0207 21:08:30.817452   10352 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220207210133-8704
	I0207 21:08:33.276989   10352 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220207210133-8704: (2.459361s)
	I0207 21:08:33.277077   10352 network_create.go:90] docker network force-systemd-env-20220207210133-8704 192.168.58.0/24 created
	I0207 21:08:33.277114   10352 kic.go:106] calculated static IP "192.168.58.2" for the "force-systemd-env-20220207210133-8704" container
	I0207 21:08:33.289489   10352 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0207 21:08:34.974495   10352 cli_runner.go:186] Completed: docker ps -a --format {{.Names}}: (1.6846958s)
	I0207 21:08:34.982410   10352 cli_runner.go:133] Run: docker volume create force-systemd-env-20220207210133-8704 --label name.minikube.sigs.k8s.io=force-systemd-env-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true
	I0207 21:08:36.845008   10352 cli_runner.go:186] Completed: docker volume create force-systemd-env-20220207210133-8704 --label name.minikube.sigs.k8s.io=force-systemd-env-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true: (1.8625257s)
	I0207 21:08:36.845282   10352 oci.go:102] Successfully created a docker volume force-systemd-env-20220207210133-8704
	I0207 21:08:36.861011   10352 cli_runner.go:133] Run: docker run --rm --name force-systemd-env-20220207210133-8704-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20220207210133-8704 --entrypoint /usr/bin/test -v force-systemd-env-20220207210133-8704:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -d /var/lib
	I0207 21:08:42.682938   10352 cli_runner.go:186] Completed: docker run --rm --name force-systemd-env-20220207210133-8704-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20220207210133-8704 --entrypoint /usr/bin/test -v force-systemd-env-20220207210133-8704:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -d /var/lib: (5.8218962s)
	I0207 21:08:42.682938   10352 oci.go:106] Successfully prepared a docker volume force-systemd-env-20220207210133-8704
	I0207 21:08:42.682938   10352 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 21:08:42.682938   10352 kic.go:179] Starting extracting preloaded images to volume ...
	I0207 21:08:42.689925   10352 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-20220207210133-8704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0207 21:09:56.492708   10352 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-20220207210133-8704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir: (1m13.8022823s)
	I0207 21:09:56.492934   10352 kic.go:188] duration metric: took 73.809612 seconds to extract preloaded images to volume
	I0207 21:09:56.502778   10352 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 21:09:59.181780   10352 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.6789244s)
	I0207 21:09:59.182076   10352 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:71 OomKillDisable:true NGoroutines:62 SystemTime:2022-02-07 21:09:57.9372151 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_
bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 21:09:59.189130   10352 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0207 21:10:01.698403   10352 cli_runner.go:186] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.5090984s)
	I0207 21:10:01.705218   10352 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-20220207210133-8704 --name force-systemd-env-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-20220207210133-8704 --network force-systemd-env-20220207210133-8704 --ip 192.168.58.2 --volume force-systemd-env-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8
	W0207 21:10:12.290801   10352 cli_runner.go:180] docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-20220207210133-8704 --name force-systemd-env-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-20220207210133-8704 --network force-systemd-env-20220207210133-8704 --ip 192.168.58.2 --volume force-systemd-env-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 returned with exit code 125
	I0207 21:10:12.290909   10352 cli_runner.go:186] Completed: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-20220207210133-8704 --name force-systemd-env-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-20220207210133-8704 --network force-systemd-env-20220207210133-8704 --ip 192.168.58.2 --volume force-systemd-env-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: (10.5846717s)
	I0207 21:10:12.291061   10352 client.go:171] LocalClient.Create took 1m47.8705741s
	I0207 21:10:14.305095   10352 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0207 21:10:14.311994   10352 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220207210133-8704
	W0207 21:10:15.941736   10352 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220207210133-8704 returned with exit code 1
	I0207 21:10:15.941801   10352 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220207210133-8704: (1.6296908s)
	I0207 21:10:15.941855   10352 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 21:10:16.242390   10352 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220207210133-8704
	W0207 21:10:17.793714   10352 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220207210133-8704 returned with exit code 1
	I0207 21:10:17.793714   10352 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220207210133-8704: (1.5511188s)
	W0207 21:10:17.793714   10352 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0207 21:10:17.793714   10352 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 21:10:17.793714   10352 start.go:129] duration metric: createHost completed in 1m53.3785123s
	I0207 21:10:17.801268   10352 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0207 21:10:17.806915   10352 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220207210133-8704
	W0207 21:10:19.292072   10352 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220207210133-8704 returned with exit code 1
	I0207 21:10:19.292111   10352 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220207210133-8704: (1.4850551s)
	I0207 21:10:19.292141   10352 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 21:10:19.542732   10352 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220207210133-8704
	W0207 21:10:20.999175   10352 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220207210133-8704 returned with exit code 1
	I0207 21:10:20.999211   10352 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220207210133-8704: (1.4563011s)
	W0207 21:10:20.999401   10352 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0207 21:10:20.999439   10352 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 21:10:20.999471   10352 fix.go:57] fixHost completed within 6m55.2138018s
	I0207 21:10:20.999471   10352 start.go:80] releasing machines lock for "force-systemd-env-20220207210133-8704", held for 6m55.2138018s
	W0207 21:10:21.000006   10352 out.go:241] * Failed to start docker container. Running "minikube delete -p force-systemd-env-20220207210133-8704" may fix it: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-20220207210133-8704 --name force-systemd-env-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-20220207210133-8704 --network force-systemd-env-20220207210133-8704 --ip 192.168.58.2 --volume force-systemd-env-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	c27c17535f9c8b38274408dac5742073211241f28ab320c4f11a35d8abfffbe8
	
	stderr:
	docker: Error response from daemon: network force-systemd-env-20220207210133-8704 not found.
	
	* Failed to start docker container. Running "minikube delete -p force-systemd-env-20220207210133-8704" may fix it: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-20220207210133-8704 --name force-systemd-env-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-20220207210133-8704 --network force-systemd-env-20220207210133-8704 --ip 192.168.58.2 --volume force-systemd-env-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	c27c17535f9c8b38274408dac5742073211241f28ab320c4f11a35d8abfffbe8
	
	stderr:
	docker: Error response from daemon: network force-systemd-env-20220207210133-8704 not found.
	
	I0207 21:10:21.004718   10352 out.go:176] 
	W0207 21:10:21.004976   10352 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-20220207210133-8704 --name force-systemd-env-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-20220207210133-8704 --network force-systemd-env-20220207210133-8704 --ip 192.168.58.2 --volume force-systemd-env-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	c27c17535f9c8b38274408dac5742073211241f28ab320c4f11a35d8abfffbe8
	
	stderr:
	docker: Error response from daemon: network force-systemd-env-20220207210133-8704 not found.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-20220207210133-8704 --name force-systemd-env-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-20220207210133-8704 --network force-systemd-env-20220207210133-8704 --ip 192.168.58.2 --volume force-systemd-env-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	c27c17535f9c8b38274408dac5742073211241f28ab320c4f11a35d8abfffbe8
	
	stderr:
	docker: Error response from daemon: network force-systemd-env-20220207210133-8704 not found.
	
	W0207 21:10:21.005023   10352 out.go:241] * 
	* 
	W0207 21:10:21.006241   10352 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0207 21:10:21.008380   10352 out.go:176] 

** /stderr **
docker_test.go:153: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p force-systemd-env-20220207210133-8704 --memory=2048 --alsologtostderr -v=5 --driver=docker" : exit status 80
docker_test.go:105: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-20220207210133-8704 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:105: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p force-systemd-env-20220207210133-8704 ssh "docker info --format {{.CgroupDriver}}": exit status 85 (3.3824031s)

-- stdout --
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p force-systemd-env-20220207210133-8704"

-- /stdout --
docker_test.go:107: failed to get docker cgroup driver. args "out/minikube-windows-amd64.exe -p force-systemd-env-20220207210133-8704 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 85
docker_test.go:162: *** TestForceSystemdEnv FAILED at 2022-02-07 21:10:24.594002 +0000 GMT m=+6674.126037101
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect force-systemd-env-20220207210133-8704
helpers_test.go:232: (dbg) Done: docker inspect force-systemd-env-20220207210133-8704: (1.4933691s)
helpers_test.go:236: (dbg) docker inspect force-systemd-env-20220207210133-8704:

-- stdout --
	[
	    {
	        "Id": "c27c17535f9c8b38274408dac5742073211241f28ab320c4f11a35d8abfffbe8",
	        "Created": "2022-02-07T21:10:03.2064708Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "created",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 128,
	            "Error": "network force-systemd-env-20220207210133-8704 not found",
	            "StartedAt": "0001-01-01T00:00:00Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:50384aa4ebef3abc81b3b83296147bd747dcd04d4644d8f3150476ffa93e6889",
	        "ResolvConfPath": "",
	        "HostnamePath": "",
	        "HostsPath": "",
	        "LogPath": "",
	        "Name": "/force-systemd-env-20220207210133-8704",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-env-20220207210133-8704:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-env-20220207210133-8704",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8e29988c594b42999bd73f00a14d5ad31f18f8d6c948fc5c94b63e284636351a-init/diff:/var/lib/docker/overlay2/75e1ee3c034aacc956b8c3ecc7ab61ac5c38e660082589ceb37efd240a771cc5/diff:/var/lib/docker/overlay2/189fe0ac50cbe021b1f58d4d3552848c814165ab41c880cc414a3d772ecf8a17/diff:/var/lib/docker/overlay2/1825c50829a945a491708c366e3adc3e6d891ec2fcbd7f13b41f06c64baa55d9/diff:/var/lib/docker/overlay2/0b9358d8d7de1369e9019714824c8f1007b6c08b3ebf296b7b1288610816a2ce/diff:/var/lib/docker/overlay2/689f6514ad269d91cd1861629d1949b077031e825417ef4dfb5621888699407b/diff:/var/lib/docker/overlay2/8dff862a1c6a46807e22567df5955e49a8aa3d0a1f2ad45ca46f2ab5374556fe/diff:/var/lib/docker/overlay2/ee466d69c85d056ef8068fd5652d1a05e5ca08f4f2880d8156cd2f212ceaaaa6/diff:/var/lib/docker/overlay2/86890d1d8e6826b123ee9ec4c463f6f91ad837f07b7147e0c6ef8c7e17b601da/diff:/var/lib/docker/overlay2/b657d041c7bdb28ab2fd58a8e3615ec574e7e5fcace80e88f630332a1ff67ff7/diff:/var/lib/docker/overlay2/4339b0c7baf085cb3dc647fb19cd967f89fdd4e316e2bc806815c81fc17efc59/diff:/var/lib/docker/overlay2/36993c24ec6e3eb908331a1c00b702e3326415b7124d4d1788747ba328eb6e2a/diff:/var/lib/docker/overlay2/5b68d569c7973aeabb60b4d744a1b86cc3ebb8b284e55bbbe33576e97e3ac021/diff:/var/lib/docker/overlay2/57b6ab85187eac783753b7bdcafb75e9d26d3e9d22b614bbfa42fbf4a6e879f8/diff:/var/lib/docker/overlay2/e5f2f9b80a695305ffbe047f65db35cc276ac41f987ec84a5742b3769918cb79/diff:/var/lib/docker/overlay2/06d7d08e9ebfbe3202537757cc03ccaa87b749e7dd8354ae1978c44a1b14a690/diff:/var/lib/docker/overlay2/44604b9a5d1c918e1d3ebe374cc5b01af83b10aef4cbf54e72d7fd0b7be60646/diff:/var/lib/docker/overlay2/9d28038d0516655f0a12f3ec5220089de0a54540a27220e4f412dd3acc577f9b/diff:/var/lib/docker/overlay2/ec704366d20c2f84ce0d53c1b278507dc9cc66331cba15d90521a96d118d45af/diff:/var/lib/docker/overlay2/32b5b8eb800bf64445a63842604512878f22712d00a869b2104a1b528d6e8010/diff:/var/lib/docker/overlay2/6ff5152a44a5b0fd36c63aa1c7199ff420477113981a2dd750c29f82e1509669/diff:/var/lib/docker/overlay2/b42f3edd75dd995daac9924998fafd7fe1b919f222b8185a3dfeef9a762660c7/diff:/var/lib/docker/overlay2/3cd19c2de3ea2cc271124c2c82db46bf5f550625dd02a5cde5c517af93c73caa/diff:/var/lib/docker/overlay2/b41830a6d20150650c5fb37bb60e7c06147734911fda7300a739cd023bb4789a/diff:/var/lib/docker/overlay2/925bf7a180aeb21aee1f13bf31ccc1f05a642fd383aabb499148885dcac5cfeb/diff:/var/lib/docker/overlay2/a5ec93ff5dc3e9d4a9975d8f1176019d102f9e8c319a4d5016f842be26bb5671/diff:/var/lib/docker/overlay2/37e01c18dc12ba0b9bd89093b244ef29456df1fb30fc4a8c3e5596b7b56ada0a/diff:/var/lib/docker/overlay2/6ce0b6587d0750a0ef5383637b91df31d4c1619e3a494b84c8714c5beebf1dbc/diff:/var/lib/docker/overlay2/8f4e875a02344a4926d7f5ad052151ca0eef0364a189b7ca60ebb338213d7c8e/diff:/var/lib/docker/overlay2/2790936ada4be199505c2cab1447b90a25076c4d2cbceadeb4a52026c71b9c60/diff:/var/lib/docker/overlay2/231fcc4021464c7f510cca7eecaabc94216fcc70cb62f97465c0d546064b25b8/diff:/var/lib/docker/overlay2/30845ecf75e8fd0fa04703004fc686bb8aff8eabe9437f4e7a1096a5bca060a3/diff:/var/lib/docker/overlay2/3ae1acee47e31df704424e5e9dbaed72199c1cb3a318825a84cc9d2f08f1d807/diff:/var/lib/docker/overlay2/f9fe697b5ffab06c3cc31c3e2b7d924c32d4f0f4ee8fd29cb5e2b46e586b4d4d/diff:/var/lib/docker/overlay2/68afa844b9fe835f1997b14fe394dac6238ee6a39aa0abfc34a93c062d58f819/diff:/var/lib/docker/overlay2/94b84dda68e5a3dbf4319437e5d026f2c5c705496ca2d9922f7e865879146b56/diff:/var/lib/docker/overlay2/f133dd3fe2bf48f8bd9dced36254f4cc973685d2ddde9ee6e0f2467ea7d34592/diff:/var/lib/docker/overlay2/dafd5505dd817285a71ea03b36fb5684a0c844441c07c909d1e6c47b874b33d4/diff:/var/lib/docker/overlay2/c714cab2096f6325d72b4b73673c329c5db40f169c0d6d5d034bf8af87b90983/diff:/var/lib/docker/overlay2/ea71191eaaa01123105da39dc897cb6e11c028c8a2e91dc62ff85bb5e0fb1884/diff:/var/lib/docker/overlay2/6c554fb0a2463d3ef05cdb7858f9788626b5c72dbb4ea5a0431ec665de90dc74/diff:/var/lib/docker/overlay2/01e92d0b67f2be5d7d6ba3f84ffac8ad1e0c516b03b45346070503f62de32e5a/diff:/var/lib/docker/overlay2/f5f6f40c4df999e1ae2e5733fa6aad1cf8963ebd6e2b9f849164ca5c149a4262/diff:/var/lib/docker/overlay2/e1eb2f89916ebfdb9a8d5aacfd9618edc370a018de0114d193b6069979c02aa7/diff:/var/lib/docker/overlay2/0e35d26329f1b7cf4e1b2bb03588192d3ea37764eab1ccc5a598db2164c932d2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8e29988c594b42999bd73f00a14d5ad31f18f8d6c948fc5c94b63e284636351a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8e29988c594b42999bd73f00a14d5ad31f18f8d6c948fc5c94b63e284636351a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8e29988c594b42999bd73f00a14d5ad31f18f8d6c948fc5c94b63e284636351a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-env-20220207210133-8704",
	                "Source": "/var/lib/docker/volumes/force-systemd-env-20220207210133-8704/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-env-20220207210133-8704",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-env-20220207210133-8704",
	                "name.minikube.sigs.k8s.io": "force-systemd-env-20220207210133-8704",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-env-20220207210133-8704": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-env-20220207210133-8704 -n force-systemd-env-20220207210133-8704
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-env-20220207210133-8704 -n force-systemd-env-20220207210133-8704: exit status 7 (3.4306365s)

-- stdout --
	Nonexistent

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "force-systemd-env-20220207210133-8704" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:176: Cleaning up "force-systemd-env-20220207210133-8704" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-20220207210133-8704
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-20220207210133-8704: (30.8677868s)
--- FAIL: TestForceSystemdEnv (566.94s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (181.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:181: nginx-svc svc.status.loadBalancer.ingress never got an IP: timed out waiting for the condition
functional_test_tunnel_test.go:182: (dbg) Run:  kubectl --context functional-20220207194118-8704 get svc nginx-svc
functional_test_tunnel_test.go:186: failed to kubectl get svc nginx-svc:

-- stdout --
	NAME        TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
	nginx-svc   LoadBalancer   10.97.244.222   <pending>     80:32575/TCP   3m17s

-- /stdout --
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (181.08s)
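The repeated `kubectl ... -o jsonpath={.status.loadBalancer.ingress[0].ip}` calls above poll the service until a load-balancer IP appears. The same lookup can be sketched in Python (a hypothetical helper, not part of the minikube test suite; field names follow the Kubernetes Service API):

```python
def ingress_ip(svc: dict):
    """Mimic the jsonpath {.status.loadBalancer.ingress[0].ip}: return the
    first load-balancer ingress IP, or None while the service is pending."""
    ingress = svc.get("status", {}).get("loadBalancer", {}).get("ingress") or []
    return ingress[0].get("ip") if ingress else None

# A service stuck at <pending> has an empty status.loadBalancer:
pending = {"status": {"loadBalancer": {}}}
# A ready service reports the assigned address:
ready = {"status": {"loadBalancer": {"ingress": [{"ip": "10.96.0.10"}]}}}
```

While the EXTERNAL-IP column shows `<pending>`, the `ingress` list is empty and the query returns nothing, which is what kept the test retrying until it hit the timeout.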

TestNoKubernetes/serial/StartWithStopK8s (83.83s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:113: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220207205647-8704 --no-kubernetes --driver=docker

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:113: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-20220207205647-8704 --no-kubernetes --driver=docker: (41.8108505s)
no_kubernetes_test.go:201: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-20220207205647-8704 status -o json

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:201: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p NoKubernetes-20220207205647-8704 status -o json: exit status 2 (7.2461525s)

-- stdout --
	{"Name":"NoKubernetes-20220207205647-8704","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
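The single-line status JSON above can be decoded directly; a minimal sketch (the comment on the non-zero exit is an interpretation of this run, not documented behavior):

```python
import json

# Status line captured in the stdout block above.
raw = ('{"Name":"NoKubernetes-20220207205647-8704","Host":"Running",'
       '"Kubelet":"Stopped","APIServer":"Stopped",'
       '"Kubeconfig":"Configured","Worker":false}')

status = json.loads(raw)
# With --no-kubernetes the host container runs but kubelet and the
# apiserver stay stopped, which is consistent with `status` exiting
# with code 2 here rather than 0.
stopped = [k for k in ("Kubelet", "APIServer") if status[k] == "Stopped"]
```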
no_kubernetes_test.go:125: (dbg) Run:  out/minikube-windows-amd64.exe delete -p NoKubernetes-20220207205647-8704

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:125: (dbg) Non-zero exit: out/minikube-windows-amd64.exe delete -p NoKubernetes-20220207205647-8704: exit status 1 (29.6727329s)

-- stdout --
	* Deleting "NoKubernetes-20220207205647-8704" in docker ...
	* Deleting container "NoKubernetes-20220207205647-8704" ...

-- /stdout --
no_kubernetes_test.go:127: failed to delete minikube profile with args: "out/minikube-windows-amd64.exe delete -p NoKubernetes-20220207205647-8704" : exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestNoKubernetes/serial/StartWithStopK8s]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect NoKubernetes-20220207205647-8704
helpers_test.go:232: (dbg) Non-zero exit: docker inspect NoKubernetes-20220207205647-8704: exit status 1 (1.5491411s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: NoKubernetes-20220207205647-8704

** /stderr **
helpers_test.go:234: failed to get docker inspect: exit status 1
helpers_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220207205647-8704 -n NoKubernetes-20220207205647-8704
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220207205647-8704 -n NoKubernetes-20220207205647-8704: exit status 7 (3.5370058s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0207 21:01:52.900084   11960 status.go:247] status error: host: state: unknown state "NoKubernetes-20220207205647-8704": docker container inspect NoKubernetes-20220207205647-8704 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220207205647-8704

** /stderr **
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "NoKubernetes-20220207205647-8704" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (83.83s)

TestNetworkPlugins/group/auto/Start (463.91s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-20220207210111-8704 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p auto-20220207210111-8704 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker: exit status 80 (7m43.7717237s)

-- stdout --
	* [auto-20220207210111-8704] minikube v1.25.1 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=13439
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Starting control plane node auto-20220207210111-8704 in cluster auto-20220207210111-8704
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "auto-20220207210111-8704" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0207 21:18:07.045458    4084 out.go:297] Setting OutFile to fd 1700 ...
	I0207 21:18:07.121446    4084 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 21:18:07.121446    4084 out.go:310] Setting ErrFile to fd 1720...
	I0207 21:18:07.121446    4084 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 21:18:07.141464    4084 out.go:304] Setting JSON to false
	I0207 21:18:07.146500    4084 start.go:112] hostinfo: {"hostname":"minikube3","uptime":436306,"bootTime":1643832381,"procs":157,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0207 21:18:07.146500    4084 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0207 21:18:07.153452    4084 out.go:176] * [auto-20220207210111-8704] minikube v1.25.1 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I0207 21:18:07.153452    4084 notify.go:174] Checking for updates...
	I0207 21:18:07.157464    4084 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0207 21:18:07.160466    4084 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0207 21:18:07.163468    4084 out.go:176]   - MINIKUBE_LOCATION=13439
	I0207 21:18:07.166454    4084 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0207 21:18:07.168451    4084 config.go:176] Loaded profile config "docker-flags-20220207211018-8704": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 21:18:07.168451    4084 config.go:176] Loaded profile config "pause-20220207211356-8704": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 21:18:07.168451    4084 config.go:176] Loaded profile config "running-upgrade-20220207211100-8704": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0207 21:18:07.169448    4084 driver.go:344] Setting default libvirt URI to qemu:///system
	I0207 21:18:10.058540    4084 docker.go:132] docker version: linux-20.10.12
	I0207 21:18:10.066268    4084 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 21:18:12.477625    4084 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.4108009s)
	I0207 21:18:12.477625    4084 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:58 OomKillDisable:true NGoroutines:52 SystemTime:2022-02-07 21:18:11.3237303 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_
bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 21:18:12.481620    4084 out.go:176] * Using the docker driver based on user configuration
	I0207 21:18:12.481620    4084 start.go:281] selected driver: docker
	I0207 21:18:12.481620    4084 start.go:798] validating driver "docker" against <nil>
	I0207 21:18:12.481620    4084 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0207 21:18:12.541044    4084 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 21:18:14.875803    4084 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.3347475s)
	I0207 21:18:14.875880    4084 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:58 OomKillDisable:true NGoroutines:52 SystemTime:2022-02-07 21:18:13.7451521 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_
bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 21:18:14.875880    4084 start_flags.go:288] no existing cluster config was found, will generate one from the flags 
	I0207 21:18:14.876627    4084 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	I0207 21:18:14.877251    4084 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0207 21:18:14.877371    4084 cni.go:93] Creating CNI manager for ""
	I0207 21:18:14.877371    4084 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0207 21:18:14.877398    4084 start_flags.go:302] config:
	{Name:auto-20220207210111-8704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:auto-20220207210111-8704 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 21:18:14.885667    4084 out.go:176] * Starting control plane node auto-20220207210111-8704 in cluster auto-20220207210111-8704
	I0207 21:18:14.885736    4084 cache.go:120] Beginning downloading kic base image for docker with docker
	I0207 21:18:14.887498    4084 out.go:176] * Pulling base image ...
	I0207 21:18:14.888086    4084 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 21:18:14.888278    4084 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon
	I0207 21:18:14.888278    4084 preload.go:148] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4
	I0207 21:18:14.888278    4084 cache.go:57] Caching tarball of preloaded images
	I0207 21:18:14.888278    4084 preload.go:174] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0207 21:18:14.888953    4084 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.3 on docker
	I0207 21:18:14.889702    4084 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-20220207210111-8704\config.json ...
	I0207 21:18:14.890301    4084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-20220207210111-8704\config.json: {Name:mk798890411f535c68f68dbc1e822364bac2bc7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 21:18:16.167409    4084 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon, skipping pull
	I0207 21:18:16.167409    4084 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 exists in daemon, skipping load
	I0207 21:18:16.167409    4084 cache.go:208] Successfully downloaded all kic artifacts
	I0207 21:18:16.167409    4084 start.go:313] acquiring machines lock for auto-20220207210111-8704: {Name:mke2396501227c78c9990cfcbfd3cbd4741eee65 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0207 21:18:16.167409    4084 start.go:317] acquired machines lock for "auto-20220207210111-8704" in 0s
	I0207 21:18:16.167409    4084 start.go:89] Provisioning new machine with config: &{Name:auto-20220207210111-8704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:auto-20220207210111-8704 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0207 21:18:16.167409    4084 start.go:126] createHost starting for "" (driver="docker")
	I0207 21:18:16.171419    4084 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0207 21:18:16.171419    4084 start.go:160] libmachine.API.Create for "auto-20220207210111-8704" (driver="docker")
	I0207 21:18:16.171419    4084 client.go:168] LocalClient.Create starting
	I0207 21:18:16.172407    4084 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0207 21:18:16.172407    4084 main.go:130] libmachine: Decoding PEM data...
	I0207 21:18:16.172407    4084 main.go:130] libmachine: Parsing certificate...
	I0207 21:18:16.172407    4084 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0207 21:18:16.172407    4084 main.go:130] libmachine: Decoding PEM data...
	I0207 21:18:16.172407    4084 main.go:130] libmachine: Parsing certificate...
	I0207 21:18:16.179407    4084 cli_runner.go:133] Run: docker network inspect auto-20220207210111-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0207 21:18:17.546240    4084 cli_runner.go:180] docker network inspect auto-20220207210111-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0207 21:18:17.546240    4084 cli_runner.go:186] Completed: docker network inspect auto-20220207210111-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.3668258s)
	I0207 21:18:17.553262    4084 network_create.go:254] running [docker network inspect auto-20220207210111-8704] to gather additional debugging logs...
	I0207 21:18:17.553262    4084 cli_runner.go:133] Run: docker network inspect auto-20220207210111-8704
	W0207 21:18:18.876912    4084 cli_runner.go:180] docker network inspect auto-20220207210111-8704 returned with exit code 1
	I0207 21:18:18.876912    4084 cli_runner.go:186] Completed: docker network inspect auto-20220207210111-8704: (1.3236437s)
	I0207 21:18:18.876912    4084 network_create.go:257] error running [docker network inspect auto-20220207210111-8704]: docker network inspect auto-20220207210111-8704: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20220207210111-8704
	I0207 21:18:18.876912    4084 network_create.go:259] output of [docker network inspect auto-20220207210111-8704]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20220207210111-8704
	
	** /stderr **
	I0207 21:18:18.881909    4084 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0207 21:18:20.218354    4084 cli_runner.go:186] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.336438s)
	I0207 21:18:20.242523    4084 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0005a43c8] misses:0}
	I0207 21:18:20.242523    4084 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:18:20.242523    4084 network_create.go:106] attempt to create docker network auto-20220207210111-8704 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0207 21:18:20.248362    4084 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220207210111-8704
	W0207 21:18:21.659696    4084 cli_runner.go:180] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220207210111-8704 returned with exit code 1
	I0207 21:18:21.659827    4084 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220207210111-8704: (1.4112603s)
	W0207 21:18:21.660010    4084 network_create.go:98] failed to create docker network auto-20220207210111-8704 192.168.49.0/24, will retry: subnet is taken
	I0207 21:18:21.680628    4084 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005a43c8] amended:false}} dirty:map[] misses:0}
	I0207 21:18:21.680628    4084 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:18:21.701632    4084 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005a43c8] amended:true}} dirty:map[192.168.49.0:0xc0005a43c8 192.168.58.0:0xc000772280] misses:0}
	I0207 21:18:21.701632    4084 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:18:21.701632    4084 network_create.go:106] attempt to create docker network auto-20220207210111-8704 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0207 21:18:21.707623    4084 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220207210111-8704
	I0207 21:18:23.806069    4084 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220207210111-8704: (2.0984351s)
	I0207 21:18:23.806069    4084 network_create.go:90] docker network auto-20220207210111-8704 192.168.58.0/24 created
	I0207 21:18:23.806069    4084 kic.go:106] calculated static IP "192.168.58.2" for the "auto-20220207210111-8704" container
	I0207 21:18:23.816070    4084 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0207 21:18:25.138086    4084 cli_runner.go:186] Completed: docker ps -a --format {{.Names}}: (1.3220095s)
	I0207 21:18:25.144350    4084 cli_runner.go:133] Run: docker volume create auto-20220207210111-8704 --label name.minikube.sigs.k8s.io=auto-20220207210111-8704 --label created_by.minikube.sigs.k8s.io=true
	I0207 21:18:26.457795    4084 cli_runner.go:186] Completed: docker volume create auto-20220207210111-8704 --label name.minikube.sigs.k8s.io=auto-20220207210111-8704 --label created_by.minikube.sigs.k8s.io=true: (1.3134391s)
	I0207 21:18:26.457795    4084 oci.go:102] Successfully created a docker volume auto-20220207210111-8704
	I0207 21:18:26.463851    4084 cli_runner.go:133] Run: docker run --rm --name auto-20220207210111-8704-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220207210111-8704 --entrypoint /usr/bin/test -v auto-20220207210111-8704:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -d /var/lib
	I0207 21:18:30.106194    4084 cli_runner.go:186] Completed: docker run --rm --name auto-20220207210111-8704-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220207210111-8704 --entrypoint /usr/bin/test -v auto-20220207210111-8704:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -d /var/lib: (3.6423246s)
	I0207 21:18:30.106194    4084 oci.go:106] Successfully prepared a docker volume auto-20220207210111-8704
	I0207 21:18:30.106194    4084 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 21:18:30.106194    4084 kic.go:179] Starting extracting preloaded images to volume ...
	I0207 21:18:30.111196    4084 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20220207210111-8704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0207 21:19:16.939309    4084 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20220207210111-8704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir: (46.8278667s)
	I0207 21:19:16.939309    4084 kic.go:188] duration metric: took 46.832868 seconds to extract preloaded images to volume
	I0207 21:19:16.949306    4084 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 21:19:19.682167    4084 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.7328462s)
	I0207 21:19:19.682167    4084 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:53 OomKillDisable:true NGoroutines:54 SystemTime:2022-02-07 21:19:18.2778341 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 21:19:19.688171    4084 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0207 21:19:21.918572    4084 cli_runner.go:186] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.2303889s)
	I0207 21:19:21.924576    4084 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20220207210111-8704 --name auto-20220207210111-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220207210111-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20220207210111-8704 --network auto-20220207210111-8704 --ip 192.168.58.2 --volume auto-20220207210111-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8
	W0207 21:19:24.263569    4084 cli_runner.go:180] docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20220207210111-8704 --name auto-20220207210111-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220207210111-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20220207210111-8704 --network auto-20220207210111-8704 --ip 192.168.58.2 --volume auto-20220207210111-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 returned with exit code 125
	I0207 21:19:24.263569    4084 cli_runner.go:186] Completed: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20220207210111-8704 --name auto-20220207210111-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220207210111-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20220207210111-8704 --network auto-20220207210111-8704 --ip 192.168.58.2 --volume auto-20220207210111-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: (2.3389809s)
	I0207 21:19:24.263569    4084 client.go:171] LocalClient.Create took 1m8.0917928s
	I0207 21:19:26.275481    4084 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0207 21:19:26.281408    4084 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220207210111-8704
	W0207 21:19:27.540485    4084 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220207210111-8704 returned with exit code 1
	I0207 21:19:27.540485    4084 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220207210111-8704: (1.2590701s)
	I0207 21:19:27.540485    4084 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 21:19:27.823814    4084 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220207210111-8704
	W0207 21:19:29.084672    4084 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220207210111-8704 returned with exit code 1
	I0207 21:19:29.084672    4084 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220207210111-8704: (1.2608507s)
	W0207 21:19:29.084672    4084 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0207 21:19:29.084672    4084 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 21:19:29.084672    4084 start.go:129] duration metric: createHost completed in 1m12.9168799s
	I0207 21:19:29.084672    4084 start.go:80] releasing machines lock for "auto-20220207210111-8704", held for 1m12.9168799s
	W0207 21:19:29.084672    4084 start.go:570] error starting host: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20220207210111-8704 --name auto-20220207210111-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220207210111-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20220207210111-8704 --network auto-20220207210111-8704 --ip 192.168.58.2 --volume auto-20220207210111-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	93196f62a388d45381d1a6fd8ebf50cffe4a58c53c22231852609eb1ec43275a
	
	stderr:
	docker: Error response from daemon: network auto-20220207210111-8704 not found.
	I0207 21:19:29.097676    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:19:30.396328    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.2985655s)
	W0207 21:19:30.396599    4084 start.go:575] delete host: Docker machine "auto-20220207210111-8704" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	W0207 21:19:30.397091    4084 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20220207210111-8704 --name auto-20220207210111-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220207210111-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20220207210111-8704 --network auto-20220207210111-8704 --ip 192.168.58.2 --volume auto-20220207210111-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	93196f62a388d45381d1a6fd8ebf50cffe4a58c53c22231852609eb1ec43275a
	
	stderr:
	docker: Error response from daemon: network auto-20220207210111-8704 not found.
	
	! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20220207210111-8704 --name auto-20220207210111-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220207210111-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20220207210111-8704 --network auto-20220207210111-8704 --ip 192.168.58.2 --volume auto-20220207210111-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	93196f62a388d45381d1a6fd8ebf50cffe4a58c53c22231852609eb1ec43275a
	
	stderr:
	docker: Error response from daemon: network auto-20220207210111-8704 not found.
	
	I0207 21:19:30.397091    4084 start.go:585] Will try again in 5 seconds ...
	I0207 21:19:35.397613    4084 start.go:313] acquiring machines lock for auto-20220207210111-8704: {Name:mke2396501227c78c9990cfcbfd3cbd4741eee65 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0207 21:19:35.397672    4084 start.go:317] acquired machines lock for "auto-20220207210111-8704" in 0s
	I0207 21:19:35.397672    4084 start.go:93] Skipping create...Using existing machine configuration
	I0207 21:19:35.397672    4084 fix.go:55] fixHost starting: 
	I0207 21:19:35.417683    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:19:36.676700    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.258945s)
	I0207 21:19:36.676700    4084 fix.go:108] recreateIfNeeded on auto-20220207210111-8704: state= err=<nil>
	I0207 21:19:36.676700    4084 fix.go:113] machineExists: false. err=machine does not exist
	I0207 21:19:36.684722    4084 out.go:176] * docker "auto-20220207210111-8704" container is missing, will recreate.
	I0207 21:19:36.684722    4084 delete.go:124] DEMOLISHING auto-20220207210111-8704 ...
	I0207 21:19:36.696747    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:19:38.002395    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.305642s)
	I0207 21:19:38.002395    4084 stop.go:79] host is in state 
	I0207 21:19:38.002395    4084 main.go:130] libmachine: Stopping "auto-20220207210111-8704"...
	I0207 21:19:38.016392    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:19:39.311954    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.2955551s)
	I0207 21:19:39.330883    4084 kic_runner.go:93] Run: systemctl --version
	I0207 21:19:39.330883    4084 kic_runner.go:114] Args: [docker exec --privileged auto-20220207210111-8704 systemctl --version]
	I0207 21:19:40.881199    4084 kic_runner.go:93] Run: sudo service kubelet stop
	I0207 21:19:40.881199    4084 kic_runner.go:114] Args: [docker exec --privileged auto-20220207210111-8704 sudo service kubelet stop]
	I0207 21:19:43.193611    4084 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container 93196f62a388d45381d1a6fd8ebf50cffe4a58c53c22231852609eb1ec43275a is not running
	
	** /stderr **
	W0207 21:19:43.193611    4084 kic.go:443] couldn't stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 93196f62a388d45381d1a6fd8ebf50cffe4a58c53c22231852609eb1ec43275a is not running
	I0207 21:19:43.211593    4084 kic_runner.go:93] Run: sudo service kubelet stop
	I0207 21:19:43.211593    4084 kic_runner.go:114] Args: [docker exec --privileged auto-20220207210111-8704 sudo service kubelet stop]
	I0207 21:19:44.599905    4084 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container 93196f62a388d45381d1a6fd8ebf50cffe4a58c53c22231852609eb1ec43275a is not running
	
	** /stderr **
	W0207 21:19:44.599905    4084 kic.go:445] couldn't force stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 93196f62a388d45381d1a6fd8ebf50cffe4a58c53c22231852609eb1ec43275a is not running
	I0207 21:19:44.612898    4084 kic_runner.go:93] Run: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
	I0207 21:19:44.612898    4084 kic_runner.go:114] Args: [docker exec --privileged auto-20220207210111-8704 docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}]
	I0207 21:19:46.008350    4084 kic.go:456] unable list containers : docker: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 93196f62a388d45381d1a6fd8ebf50cffe4a58c53c22231852609eb1ec43275a is not running
	I0207 21:19:46.008350    4084 kic.go:466] successfully stopped kubernetes!
	I0207 21:19:46.022358    4084 kic_runner.go:93] Run: pgrep kube-apiserver
	I0207 21:19:46.022358    4084 kic_runner.go:114] Args: [docker exec --privileged auto-20220207210111-8704 pgrep kube-apiserver]
	I0207 21:19:48.823893    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:19:50.090806    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.2669073s)
	I0207 21:19:53.103409    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:19:54.383231    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.279816s)
	I0207 21:19:57.390991    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:19:58.749843    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.3588445s)
	I0207 21:20:01.764919    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:20:03.126690    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.3616672s)
	I0207 21:20:06.142752    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:20:07.447608    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.3048042s)
	I0207 21:20:10.464450    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:20:11.863508    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.3990503s)
	I0207 21:20:14.878658    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:20:16.176618    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.2979529s)
	I0207 21:20:19.188882    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:20:20.460515    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.2716256s)
	I0207 21:20:23.480685    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:20:24.750523    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.2697867s)
	I0207 21:20:27.766542    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:20:29.045508    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.2789239s)
	I0207 21:20:32.063473    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:20:33.347288    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.2838085s)
	I0207 21:20:36.363702    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:20:37.638232    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.2745231s)
	I0207 21:20:40.651026    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:20:41.919263    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.2679733s)
	I0207 21:20:44.930430    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:20:46.264264    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.3338266s)
	I0207 21:20:49.283003    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:20:50.629244    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.3461081s)
	I0207 21:20:53.646111    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:20:54.992012    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.3458935s)
	I0207 21:20:58.007016    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:20:59.333411    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.3263877s)
	I0207 21:21:02.349566    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:21:03.707057    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.357484s)
	I0207 21:21:06.720630    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:21:08.001780    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.2810926s)
	I0207 21:21:11.018912    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:21:12.342549    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.3234926s)
	I0207 21:21:15.361987    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:21:16.640658    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.2786634s)
	I0207 21:21:19.654877    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:21:21.041613    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.386661s)
	I0207 21:21:24.055714    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:21:25.424077    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.3683558s)
	I0207 21:21:28.437012    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:21:29.732279    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.2952157s)
	I0207 21:21:32.748153    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:21:34.015557    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.2667173s)
	I0207 21:21:37.029197    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:21:38.248936    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.2197321s)
	I0207 21:21:41.263789    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:21:42.509735    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.2458521s)
	I0207 21:21:45.523484    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:21:46.807170    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.2836222s)
	I0207 21:21:49.821537    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:21:51.146564    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.3248527s)
	I0207 21:21:54.165067    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:21:55.528693    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.363619s)
	I0207 21:21:58.542609    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:21:59.923112    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.3804962s)
	I0207 21:22:02.943439    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:22:04.201940    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.2584936s)
	I0207 21:22:07.213358    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:22:08.505389    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.2919894s)
	I0207 21:22:11.538557    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:22:12.897938    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.3592933s)
	I0207 21:22:15.910858    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:22:17.155356    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.2444915s)
	I0207 21:22:20.171145    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:22:21.580520    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.4092935s)
	I0207 21:22:24.597135    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:22:25.973137    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.3757478s)
	I0207 21:22:28.985744    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:22:30.326014    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.3402629s)
	I0207 21:22:33.342993    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:22:34.609815    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.2668153s)
	I0207 21:22:37.624340    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:22:39.236342    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.6119605s)
	I0207 21:22:42.249960    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:22:43.651593    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.4016257s)
	I0207 21:22:46.667906    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:22:48.311345    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.6434303s)
	I0207 21:22:51.334626    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:22:52.755402    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.4207685s)
	I0207 21:22:55.765103    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:22:57.147824    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.3827142s)
	I0207 21:23:00.163201    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:23:01.528120    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.3648619s)
	I0207 21:23:04.542579    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:23:05.941364    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.3987781s)
	I0207 21:23:08.959770    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:23:10.540196    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.5804175s)
	I0207 21:23:13.553538    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:23:14.916454    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.3627257s)
	I0207 21:23:17.941850    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:23:19.645740    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.7038815s)
	I0207 21:23:22.658114    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:23:24.022172    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.3639316s)
	I0207 21:23:27.038533    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:23:28.405980    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.3673547s)
	I0207 21:23:31.422438    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:23:32.884335    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.4618899s)
	I0207 21:23:35.897805    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:23:37.276740    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.3789283s)
	I0207 21:23:40.289506    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:23:41.679516    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.3897984s)
	I0207 21:23:44.691509    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:23:46.063518    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.3720018s)
	I0207 21:23:49.078577    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:23:50.814205    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.7356189s)
	I0207 21:23:53.837585    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:23:55.215941    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.3783484s)
	I0207 21:23:58.229717    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:23:59.568867    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.3391428s)
	I0207 21:24:02.581316    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:24:03.883147    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.3018246s)
	I0207 21:24:06.896594    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:24:08.169153    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.2724823s)
	I0207 21:24:11.170638    4084 stop.go:59] stop err: Maximum number of retries (60) exceeded
	I0207 21:24:11.170638    4084 delete.go:129] stophost failed (probably ok): Temporary Error: stop: Maximum number of retries (60) exceeded
	I0207 21:24:11.183734    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:24:12.482878    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.2991371s)
	W0207 21:24:12.483136    4084 delete.go:135] deletehost failed: Docker machine "auto-20220207210111-8704" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0207 21:24:12.491634    4084 cli_runner.go:133] Run: docker container inspect -f {{.Id}} auto-20220207210111-8704
	I0207 21:24:13.836637    4084 cli_runner.go:186] Completed: docker container inspect -f {{.Id}} auto-20220207210111-8704: (1.3449413s)
	I0207 21:24:13.843603    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:24:15.132076    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.2884655s)
	I0207 21:24:15.138503    4084 cli_runner.go:133] Run: docker exec --privileged -t auto-20220207210111-8704 /bin/bash -c "sudo init 0"
	W0207 21:24:16.565501    4084 cli_runner.go:180] docker exec --privileged -t auto-20220207210111-8704 /bin/bash -c "sudo init 0" returned with exit code 1
	I0207 21:24:16.565573    4084 cli_runner.go:186] Completed: docker exec --privileged -t auto-20220207210111-8704 /bin/bash -c "sudo init 0": (1.4269412s)
	I0207 21:24:16.565824    4084 oci.go:659] error shutdown auto-20220207210111-8704: docker exec --privileged -t auto-20220207210111-8704 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 93196f62a388d45381d1a6fd8ebf50cffe4a58c53c22231852609eb1ec43275a is not running
	I0207 21:24:17.574681    4084 cli_runner.go:133] Run: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}
	I0207 21:24:18.896582    4084 cli_runner.go:186] Completed: docker container inspect auto-20220207210111-8704 --format={{.State.Status}}: (1.3218941s)
	I0207 21:24:18.896582    4084 oci.go:673] temporary error: container auto-20220207210111-8704 status is  but expect it to be exited
	I0207 21:24:18.896582    4084 oci.go:679] Successfully shutdown container auto-20220207210111-8704
	I0207 21:24:18.901546    4084 cli_runner.go:133] Run: docker rm -f -v auto-20220207210111-8704
	I0207 21:24:20.367529    4084 cli_runner.go:186] Completed: docker rm -f -v auto-20220207210111-8704: (1.4659754s)
	I0207 21:24:20.375026    4084 cli_runner.go:133] Run: docker container inspect -f {{.Id}} auto-20220207210111-8704
	W0207 21:24:21.745404    4084 cli_runner.go:180] docker container inspect -f {{.Id}} auto-20220207210111-8704 returned with exit code 1
	I0207 21:24:21.745404    4084 cli_runner.go:186] Completed: docker container inspect -f {{.Id}} auto-20220207210111-8704: (1.3703707s)
	I0207 21:24:21.751395    4084 cli_runner.go:133] Run: docker network inspect auto-20220207210111-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0207 21:24:23.117669    4084 cli_runner.go:180] docker network inspect auto-20220207210111-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0207 21:24:23.117828    4084 cli_runner.go:186] Completed: docker network inspect auto-20220207210111-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.3662665s)
	I0207 21:24:23.124813    4084 network_create.go:254] running [docker network inspect auto-20220207210111-8704] to gather additional debugging logs...
	I0207 21:24:23.124898    4084 cli_runner.go:133] Run: docker network inspect auto-20220207210111-8704
	W0207 21:24:24.443522    4084 cli_runner.go:180] docker network inspect auto-20220207210111-8704 returned with exit code 1
	I0207 21:24:24.443522    4084 cli_runner.go:186] Completed: docker network inspect auto-20220207210111-8704: (1.3185856s)
	I0207 21:24:24.443522    4084 network_create.go:257] error running [docker network inspect auto-20220207210111-8704]: docker network inspect auto-20220207210111-8704: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20220207210111-8704
	I0207 21:24:24.443522    4084 network_create.go:259] output of [docker network inspect auto-20220207210111-8704]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20220207210111-8704
	
	** /stderr **
	W0207 21:24:24.444740    4084 delete.go:139] delete failed (probably ok) <nil>
	I0207 21:24:24.444869    4084 fix.go:120] Sleeping 1 second for extra luck!
	I0207 21:24:25.446283    4084 start.go:126] createHost starting for "" (driver="docker")
	I0207 21:24:25.450568    4084 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0207 21:24:25.451167    4084 start.go:160] libmachine.API.Create for "auto-20220207210111-8704" (driver="docker")
	I0207 21:24:25.451167    4084 client.go:168] LocalClient.Create starting
	I0207 21:24:25.451840    4084 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0207 21:24:25.452043    4084 main.go:130] libmachine: Decoding PEM data...
	I0207 21:24:25.452138    4084 main.go:130] libmachine: Parsing certificate...
	I0207 21:24:25.452330    4084 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0207 21:24:25.452549    4084 main.go:130] libmachine: Decoding PEM data...
	I0207 21:24:25.452549    4084 main.go:130] libmachine: Parsing certificate...
	I0207 21:24:25.460864    4084 cli_runner.go:133] Run: docker network inspect auto-20220207210111-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0207 21:24:26.920733    4084 cli_runner.go:180] docker network inspect auto-20220207210111-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0207 21:24:26.921023    4084 cli_runner.go:186] Completed: docker network inspect auto-20220207210111-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.4597161s)
	I0207 21:24:26.933139    4084 network_create.go:254] running [docker network inspect auto-20220207210111-8704] to gather additional debugging logs...
	I0207 21:24:26.933139    4084 cli_runner.go:133] Run: docker network inspect auto-20220207210111-8704
	W0207 21:24:28.328314    4084 cli_runner.go:180] docker network inspect auto-20220207210111-8704 returned with exit code 1
	I0207 21:24:28.328665    4084 cli_runner.go:186] Completed: docker network inspect auto-20220207210111-8704: (1.395167s)
	I0207 21:24:28.328665    4084 network_create.go:257] error running [docker network inspect auto-20220207210111-8704]: docker network inspect auto-20220207210111-8704: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20220207210111-8704
	I0207 21:24:28.328829    4084 network_create.go:259] output of [docker network inspect auto-20220207210111-8704]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20220207210111-8704
	
	** /stderr **
	I0207 21:24:28.336862    4084 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0207 21:24:29.663667    4084 cli_runner.go:186] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.3267236s)
	I0207 21:24:29.695412    4084 network.go:284] reusing subnet 192.168.49.0 that has expired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005a43c8] amended:true}} dirty:map[192.168.49.0:0xc0005a43c8 192.168.58.0:0xc000772280] misses:0}
	I0207 21:24:29.695412    4084 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:24:29.695412    4084 network_create.go:106] attempt to create docker network auto-20220207210111-8704 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0207 21:24:29.704478    4084 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220207210111-8704
	W0207 21:24:31.365411    4084 cli_runner.go:180] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220207210111-8704 returned with exit code 1
	I0207 21:24:31.365411    4084 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220207210111-8704: (1.6609238s)
	W0207 21:24:31.365411    4084 network_create.go:98] failed to create docker network auto-20220207210111-8704 192.168.49.0/24, will retry: subnet is taken
	I0207 21:24:31.391400    4084 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005a43c8] amended:true}} dirty:map[192.168.49.0:0xc0005a43c8 192.168.58.0:0xc000772280] misses:0}
	I0207 21:24:31.391400    4084 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:24:31.417385    4084 network.go:284] reusing subnet 192.168.58.0 that has expired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005a43c8] amended:true}} dirty:map[192.168.49.0:0xc0005a43c8 192.168.58.0:0xc000772280] misses:1}
	I0207 21:24:31.417385    4084 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:24:31.417385    4084 network_create.go:106] attempt to create docker network auto-20220207210111-8704 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0207 21:24:31.431392    4084 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220207210111-8704
	W0207 21:24:33.145508    4084 cli_runner.go:180] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220207210111-8704 returned with exit code 1
	I0207 21:24:33.145596    4084 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220207210111-8704: (1.7141074s)
	W0207 21:24:33.146110    4084 network_create.go:98] failed to create docker network auto-20220207210111-8704 192.168.58.0/24, will retry: subnet is taken
	I0207 21:24:33.179120    4084 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005a43c8 192.168.58.0:0xc000772280] amended:false}} dirty:map[] misses:0}
	I0207 21:24:33.179120    4084 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:24:33.197691    4084 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005a43c8 192.168.58.0:0xc000772280] amended:true}} dirty:map[192.168.49.0:0xc0005a43c8 192.168.58.0:0xc000772280 192.168.67.0:0xc000006200] misses:0}
	I0207 21:24:33.197691    4084 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:24:33.197691    4084 network_create.go:106] attempt to create docker network auto-20220207210111-8704 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0207 21:24:33.203147    4084 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220207210111-8704
	I0207 21:24:35.561248    4084 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220207210111-8704: (2.357878s)
	I0207 21:24:35.561355    4084 network_create.go:90] docker network auto-20220207210111-8704 192.168.67.0/24 created
	I0207 21:24:35.561355    4084 kic.go:106] calculated static IP "192.168.67.2" for the "auto-20220207210111-8704" container
	I0207 21:24:35.571061    4084 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0207 21:24:37.010170    4084 cli_runner.go:186] Completed: docker ps -a --format {{.Names}}: (1.4381265s)
	I0207 21:24:37.018182    4084 cli_runner.go:133] Run: docker volume create auto-20220207210111-8704 --label name.minikube.sigs.k8s.io=auto-20220207210111-8704 --label created_by.minikube.sigs.k8s.io=true
	I0207 21:24:38.352058    4084 cli_runner.go:186] Completed: docker volume create auto-20220207210111-8704 --label name.minikube.sigs.k8s.io=auto-20220207210111-8704 --label created_by.minikube.sigs.k8s.io=true: (1.3337528s)
	I0207 21:24:38.352058    4084 oci.go:102] Successfully created a docker volume auto-20220207210111-8704
	I0207 21:24:38.359972    4084 cli_runner.go:133] Run: docker run --rm --name auto-20220207210111-8704-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220207210111-8704 --entrypoint /usr/bin/test -v auto-20220207210111-8704:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -d /var/lib
	I0207 21:24:42.777495    4084 cli_runner.go:186] Completed: docker run --rm --name auto-20220207210111-8704-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220207210111-8704 --entrypoint /usr/bin/test -v auto-20220207210111-8704:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -d /var/lib: (4.4174993s)
	I0207 21:24:42.777495    4084 oci.go:106] Successfully prepared a docker volume auto-20220207210111-8704
	I0207 21:24:42.777495    4084 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 21:24:42.777495    4084 kic.go:179] Starting extracting preloaded images to volume ...
	I0207 21:24:42.783161    4084 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20220207210111-8704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0207 21:25:35.050030    4084 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20220207210111-8704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir: (52.2665946s)
	I0207 21:25:35.050030    4084 kic.go:188] duration metric: took 52.272261 seconds to extract preloaded images to volume
	I0207 21:25:35.061031    4084 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 21:25:37.563504    4084 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.5024597s)
	I0207 21:25:37.563504    4084 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:51 OomKillDisable:true NGoroutines:52 SystemTime:2022-02-07 21:25:36.4650765 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_
bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 21:25:37.569504    4084 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0207 21:25:39.836196    4084 cli_runner.go:186] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.266637s)
	I0207 21:25:39.844992    4084 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20220207210111-8704 --name auto-20220207210111-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220207210111-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20220207210111-8704 --network auto-20220207210111-8704 --ip 192.168.67.2 --volume auto-20220207210111-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8
	W0207 21:25:42.294030    4084 cli_runner.go:180] docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20220207210111-8704 --name auto-20220207210111-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220207210111-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20220207210111-8704 --network auto-20220207210111-8704 --ip 192.168.67.2 --volume auto-20220207210111-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 returned with exit code 125
	I0207 21:25:42.294288    4084 cli_runner.go:186] Completed: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20220207210111-8704 --name auto-20220207210111-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220207210111-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20220207210111-8704 --network auto-20220207210111-8704 --ip 192.168.67.2 --volume auto-20220207210111-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: (2.4487101s)
	I0207 21:25:42.294422    4084 client.go:171] LocalClient.Create took 1m16.8427613s
	I0207 21:25:44.302650    4084 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0207 21:25:44.307407    4084 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220207210111-8704
	W0207 21:25:45.735095    4084 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220207210111-8704 returned with exit code 1
	I0207 21:25:45.735095    4084 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220207210111-8704: (1.4265111s)
	I0207 21:25:45.735278    4084 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 21:25:46.038830    4084 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220207210111-8704
	W0207 21:25:47.473708    4084 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220207210111-8704 returned with exit code 1
	I0207 21:25:47.473708    4084 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220207210111-8704: (1.434661s)
	W0207 21:25:47.473839    4084 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0207 21:25:47.473839    4084 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 21:25:47.473839    4084 start.go:129] duration metric: createHost completed in 1m22.0270712s
	I0207 21:25:47.482363    4084 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0207 21:25:47.487823    4084 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220207210111-8704
	W0207 21:25:48.896795    4084 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220207210111-8704 returned with exit code 1
	I0207 21:25:48.896795    4084 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220207210111-8704: (1.4089648s)
	I0207 21:25:48.896795    4084 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 21:25:49.134776    4084 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220207210111-8704
	W0207 21:25:50.541952    4084 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220207210111-8704 returned with exit code 1
	I0207 21:25:50.542123    4084 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220207210111-8704: (1.4071115s)
	W0207 21:25:50.542307    4084 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0207 21:25:50.542307    4084 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 21:25:50.542307    4084 fix.go:57] fixHost completed within 6m15.142651s
	I0207 21:25:50.542437    4084 start.go:80] releasing machines lock for "auto-20220207210111-8704", held for 6m15.1427807s
	W0207 21:25:50.543041    4084 out.go:241] * Failed to start docker container. Running "minikube delete -p auto-20220207210111-8704" may fix it: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20220207210111-8704 --name auto-20220207210111-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220207210111-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20220207210111-8704 --network auto-20220207210111-8704 --ip 192.168.67.2 --volume auto-20220207210111-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	3d69c55a34a09ba02b44f953e0998137e1351b5ad3ea0104ab0befc2183e4e74
	
	stderr:
	docker: Error response from daemon: network auto-20220207210111-8704 not found.
	
	I0207 21:25:50.549296    4084 out.go:176] 
	W0207 21:25:50.549628    4084 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20220207210111-8704 --name auto-20220207210111-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220207210111-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20220207210111-8704 --network auto-20220207210111-8704 --ip 192.168.67.2 --volume auto-20220207210111-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	3d69c55a34a09ba02b44f953e0998137e1351b5ad3ea0104ab0befc2183e4e74
	
	stderr:
	docker: Error response from daemon: network auto-20220207210111-8704 not found.
	
	W0207 21:25:50.549682    4084 out.go:241] * 
	W0207 21:25:50.551714    4084 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0207 21:25:50.554350    4084 out.go:176] 

                                                
                                                
** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (463.91s)

                                                
                                    
TestNetworkPlugins/group/cilium/Start (461.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p cilium-20220207210133-8704 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cilium-20220207210133-8704 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker: exit status 80 (7m41.4802254s)

                                                
                                                
-- stdout --
	* [cilium-20220207210133-8704] minikube v1.25.1 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=13439
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Starting control plane node cilium-20220207210133-8704 in cluster cilium-20220207210133-8704
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cilium-20220207210133-8704" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0207 21:19:11.930479    8152 out.go:297] Setting OutFile to fd 1784 ...
	I0207 21:19:12.001585    8152 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 21:19:12.001585    8152 out.go:310] Setting ErrFile to fd 1552...
	I0207 21:19:12.001585    8152 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 21:19:12.017931    8152 out.go:304] Setting JSON to false
	I0207 21:19:12.019573    8152 start.go:112] hostinfo: {"hostname":"minikube3","uptime":436371,"bootTime":1643832381,"procs":159,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0207 21:19:12.020588    8152 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0207 21:19:12.215747    8152 out.go:176] * [cilium-20220207210133-8704] minikube v1.25.1 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I0207 21:19:12.217012    8152 notify.go:174] Checking for updates...
	I0207 21:19:12.463665    8152 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0207 21:19:12.564319    8152 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0207 21:19:12.759950    8152 out.go:176]   - MINIKUBE_LOCATION=13439
	I0207 21:19:13.870193    8152 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0207 21:19:13.872879    8152 config.go:176] Loaded profile config "auto-20220207210111-8704": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 21:19:13.874363    8152 config.go:176] Loaded profile config "false-20220207210133-8704": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 21:19:13.875080    8152 config.go:176] Loaded profile config "pause-20220207211356-8704": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 21:19:13.875253    8152 driver.go:344] Setting default libvirt URI to qemu:///system
	I0207 21:19:16.586023    8152 docker.go:132] docker version: linux-20.10.12
	I0207 21:19:16.592684    8152 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 21:19:18.949975    8152 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.3572789s)
	I0207 21:19:18.950947    8152 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:54 OomKillDisable:true NGoroutines:55 SystemTime:2022-02-07 21:19:17.8717534 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_
bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 21:19:18.953997    8152 out.go:176] * Using the docker driver based on user configuration
	I0207 21:19:18.953997    8152 start.go:281] selected driver: docker
	I0207 21:19:18.953997    8152 start.go:798] validating driver "docker" against <nil>
	I0207 21:19:18.954745    8152 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0207 21:19:19.026086    8152 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 21:19:21.286161    8152 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.260063s)
	I0207 21:19:21.286240    8152 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:47 OomKillDisable:true NGoroutines:47 SystemTime:2022-02-07 21:19:20.19098 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64
IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bp
s_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 21:19:21.286240    8152 start_flags.go:288] no existing cluster config was found, will generate one from the flags 
	I0207 21:19:21.286824    8152 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	I0207 21:19:21.286824    8152 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0207 21:19:21.286824    8152 cni.go:93] Creating CNI manager for "cilium"
	I0207 21:19:21.286824    8152 start_flags.go:297] Found "Cilium" CNI - setting NetworkPlugin=cni
	I0207 21:19:21.286824    8152 start_flags.go:302] config:
	{Name:cilium-20220207210133-8704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:cilium-20220207210133-8704 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 21:19:21.290861    8152 out.go:176] * Starting control plane node cilium-20220207210133-8704 in cluster cilium-20220207210133-8704
	I0207 21:19:21.291089    8152 cache.go:120] Beginning downloading kic base image for docker with docker
	I0207 21:19:21.294243    8152 out.go:176] * Pulling base image ...
	I0207 21:19:21.294243    8152 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 21:19:21.294243    8152 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon
	I0207 21:19:21.294243    8152 preload.go:148] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4
	I0207 21:19:21.294243    8152 cache.go:57] Caching tarball of preloaded images
	I0207 21:19:21.294243    8152 preload.go:174] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0207 21:19:21.294243    8152 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.3 on docker
	I0207 21:19:21.294243    8152 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\cilium-20220207210133-8704\config.json ...
	I0207 21:19:21.295842    8152 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\cilium-20220207210133-8704\config.json: {Name:mkebedda8cdae29016a421ac3b46837fff248fa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 21:19:22.550178    8152 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon, skipping pull
	I0207 21:19:22.550178    8152 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 exists in daemon, skipping load
	I0207 21:19:22.550178    8152 cache.go:208] Successfully downloaded all kic artifacts
	I0207 21:19:22.550178    8152 start.go:313] acquiring machines lock for cilium-20220207210133-8704: {Name:mk87ff27baf92d0ad2a5cf91b1d9697c23fa7d14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0207 21:19:22.550178    8152 start.go:317] acquired machines lock for "cilium-20220207210133-8704" in 0s
	I0207 21:19:22.550178    8152 start.go:89] Provisioning new machine with config: &{Name:cilium-20220207210133-8704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:cilium-20220207210133-8704 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0207 21:19:22.550178    8152 start.go:126] createHost starting for "" (driver="docker")
	I0207 21:19:22.554149    8152 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0207 21:19:22.555167    8152 start.go:160] libmachine.API.Create for "cilium-20220207210133-8704" (driver="docker")
	I0207 21:19:22.555167    8152 client.go:168] LocalClient.Create starting
	I0207 21:19:22.555167    8152 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0207 21:19:22.556150    8152 main.go:130] libmachine: Decoding PEM data...
	I0207 21:19:22.556150    8152 main.go:130] libmachine: Parsing certificate...
	I0207 21:19:22.556150    8152 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0207 21:19:22.556150    8152 main.go:130] libmachine: Decoding PEM data...
	I0207 21:19:22.556150    8152 main.go:130] libmachine: Parsing certificate...
	I0207 21:19:22.563151    8152 cli_runner.go:133] Run: docker network inspect cilium-20220207210133-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0207 21:19:23.942672    8152 cli_runner.go:180] docker network inspect cilium-20220207210133-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0207 21:19:23.942672    8152 cli_runner.go:186] Completed: docker network inspect cilium-20220207210133-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.3795141s)
	I0207 21:19:23.947685    8152 network_create.go:254] running [docker network inspect cilium-20220207210133-8704] to gather additional debugging logs...
	I0207 21:19:23.947685    8152 cli_runner.go:133] Run: docker network inspect cilium-20220207210133-8704
	W0207 21:19:25.227592    8152 cli_runner.go:180] docker network inspect cilium-20220207210133-8704 returned with exit code 1
	I0207 21:19:25.227592    8152 cli_runner.go:186] Completed: docker network inspect cilium-20220207210133-8704: (1.2799005s)
	I0207 21:19:25.227592    8152 network_create.go:257] error running [docker network inspect cilium-20220207210133-8704]: docker network inspect cilium-20220207210133-8704: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20220207210133-8704
	I0207 21:19:25.227592    8152 network_create.go:259] output of [docker network inspect cilium-20220207210133-8704]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20220207210133-8704
	
	** /stderr **
	I0207 21:19:25.233693    8152 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0207 21:19:26.449689    8152 cli_runner.go:186] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.2153637s)
	I0207 21:19:26.472163    8152 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000ad40a8] misses:0}
	I0207 21:19:26.472163    8152 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:19:26.472163    8152 network_create.go:106] attempt to create docker network cilium-20220207210133-8704 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0207 21:19:26.479163    8152 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220207210133-8704
	W0207 21:19:27.768872    8152 cli_runner.go:180] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220207210133-8704 returned with exit code 1
	I0207 21:19:27.768872    8152 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220207210133-8704: (1.2896696s)
	W0207 21:19:27.768872    8152 network_create.go:98] failed to create docker network cilium-20220207210133-8704 192.168.49.0/24, will retry: subnet is taken
	I0207 21:19:27.793343    8152 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000ad40a8] amended:false}} dirty:map[] misses:0}
	I0207 21:19:27.793343    8152 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:19:27.812603    8152 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000ad40a8] amended:true}} dirty:map[192.168.49.0:0xc000ad40a8 192.168.58.0:0xc000ad4a38] misses:0}
	I0207 21:19:27.813611    8152 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:19:27.813611    8152 network_create.go:106] attempt to create docker network cilium-20220207210133-8704 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0207 21:19:27.820527    8152 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220207210133-8704
	I0207 21:19:30.023849    8152 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220207210133-8704: (2.2032189s)
	I0207 21:19:30.023994    8152 network_create.go:90] docker network cilium-20220207210133-8704 192.168.58.0/24 created
	I0207 21:19:30.023994    8152 kic.go:106] calculated static IP "192.168.58.2" for the "cilium-20220207210133-8704" container
	I0207 21:19:30.036125    8152 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0207 21:19:31.326494    8152 cli_runner.go:186] Completed: docker ps -a --format {{.Names}}: (1.2901915s)
	I0207 21:19:31.332660    8152 cli_runner.go:133] Run: docker volume create cilium-20220207210133-8704 --label name.minikube.sigs.k8s.io=cilium-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true
	I0207 21:19:32.597723    8152 cli_runner.go:186] Completed: docker volume create cilium-20220207210133-8704 --label name.minikube.sigs.k8s.io=cilium-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true: (1.2650562s)
	I0207 21:19:32.598034    8152 oci.go:102] Successfully created a docker volume cilium-20220207210133-8704
	I0207 21:19:32.604953    8152 cli_runner.go:133] Run: docker run --rm --name cilium-20220207210133-8704-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220207210133-8704 --entrypoint /usr/bin/test -v cilium-20220207210133-8704:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -d /var/lib
	I0207 21:19:37.998433    8152 cli_runner.go:186] Completed: docker run --rm --name cilium-20220207210133-8704-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220207210133-8704 --entrypoint /usr/bin/test -v cilium-20220207210133-8704:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -d /var/lib: (5.3934512s)
	I0207 21:19:37.998433    8152 oci.go:106] Successfully prepared a docker volume cilium-20220207210133-8704
	I0207 21:19:37.998433    8152 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 21:19:37.998433    8152 kic.go:179] Starting extracting preloaded images to volume ...
	I0207 21:19:38.007386    8152 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20220207210133-8704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0207 21:20:23.636960    8152 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20220207210133-8704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir: (45.6293337s)
	I0207 21:20:23.636960    8152 kic.go:188] duration metric: took 45.638287 seconds to extract preloaded images to volume
	I0207 21:20:23.645675    8152 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 21:20:25.866295    8152 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.2204508s)
	I0207 21:20:25.866362    8152 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:47 OomKillDisable:true NGoroutines:50 SystemTime:2022-02-07 21:20:24.8260609 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 21:20:25.873453    8152 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0207 21:20:28.009329    8152 cli_runner.go:186] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.1357365s)
	I0207 21:20:28.014332    8152 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220207210133-8704 --name cilium-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220207210133-8704 --network cilium-20220207210133-8704 --ip 192.168.58.2 --volume cilium-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8
	W0207 21:20:30.271681    8152 cli_runner.go:180] docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220207210133-8704 --name cilium-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220207210133-8704 --network cilium-20220207210133-8704 --ip 192.168.58.2 --volume cilium-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 returned with exit code 125
	I0207 21:20:30.271681    8152 cli_runner.go:186] Completed: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220207210133-8704 --name cilium-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220207210133-8704 --network cilium-20220207210133-8704 --ip 192.168.58.2 --volume cilium-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: (2.2573365s)
	I0207 21:20:30.271681    8152 client.go:171] LocalClient.Create took 1m7.7161565s
	I0207 21:20:32.282137    8152 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0207 21:20:32.289392    8152 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220207210133-8704
	W0207 21:20:33.592048    8152 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220207210133-8704 returned with exit code 1
	I0207 21:20:33.592140    8152 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220207210133-8704: (1.3026486s)
	I0207 21:20:33.592386    8152 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 21:20:33.880673    8152 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220207210133-8704
	W0207 21:20:35.194076    8152 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220207210133-8704 returned with exit code 1
	I0207 21:20:35.194076    8152 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220207210133-8704: (1.3132579s)
	W0207 21:20:35.194076    8152 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0207 21:20:35.194296    8152 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 21:20:35.194343    8152 start.go:129] duration metric: createHost completed in 1m12.6437817s
	I0207 21:20:35.194343    8152 start.go:80] releasing machines lock for "cilium-20220207210133-8704", held for 1m12.6437817s
	W0207 21:20:35.194853    8152 start.go:570] error starting host: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220207210133-8704 --name cilium-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220207210133-8704 --network cilium-20220207210133-8704 --ip 192.168.58.2 --volume cilium-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	f6185c0db1b8a7c2bb0337f982092be194e5e86deb05b8876274076052b327f8
	
	stderr:
	docker: Error response from daemon: network cilium-20220207210133-8704 not found.
	I0207 21:20:35.207110    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:20:36.471723    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.2646069s)
	W0207 21:20:36.471723    8152 start.go:575] delete host: Docker machine "cilium-20220207210133-8704" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	W0207 21:20:36.471723    8152 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220207210133-8704 --name cilium-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220207210133-8704 --network cilium-20220207210133-8704 --ip 192.168.58.2 --volume cilium-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	f6185c0db1b8a7c2bb0337f982092be194e5e86deb05b8876274076052b327f8
	
	stderr:
	docker: Error response from daemon: network cilium-20220207210133-8704 not found.
	
	! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220207210133-8704 --name cilium-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220207210133-8704 --network cilium-20220207210133-8704 --ip 192.168.58.2 --volume cilium-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	f6185c0db1b8a7c2bb0337f982092be194e5e86deb05b8876274076052b327f8
	
	stderr:
	docker: Error response from daemon: network cilium-20220207210133-8704 not found.
	
	I0207 21:20:36.471723    8152 start.go:585] Will try again in 5 seconds ...
	I0207 21:20:41.473883    8152 start.go:313] acquiring machines lock for cilium-20220207210133-8704: {Name:mk87ff27baf92d0ad2a5cf91b1d9697c23fa7d14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0207 21:20:41.474352    8152 start.go:317] acquired machines lock for "cilium-20220207210133-8704" in 272.8µs
	I0207 21:20:41.474554    8152 start.go:93] Skipping create...Using existing machine configuration
	I0207 21:20:41.474554    8152 fix.go:55] fixHost starting: 
	I0207 21:20:41.485741    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:20:42.787087    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.301249s)
	I0207 21:20:42.787087    8152 fix.go:108] recreateIfNeeded on cilium-20220207210133-8704: state= err=<nil>
	I0207 21:20:42.787165    8152 fix.go:113] machineExists: false. err=machine does not exist
	I0207 21:20:42.790617    8152 out.go:176] * docker "cilium-20220207210133-8704" container is missing, will recreate.
	I0207 21:20:42.790675    8152 delete.go:124] DEMOLISHING cilium-20220207210133-8704 ...
	I0207 21:20:42.801807    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:20:44.078517    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.2765665s)
	I0207 21:20:44.078555    8152 stop.go:79] host is in state 
	I0207 21:20:44.078640    8152 main.go:130] libmachine: Stopping "cilium-20220207210133-8704"...
	I0207 21:20:44.089856    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:20:45.365676    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.2757563s)
	I0207 21:20:45.384020    8152 kic_runner.go:93] Run: systemctl --version
	I0207 21:20:45.384020    8152 kic_runner.go:114] Args: [docker exec --privileged cilium-20220207210133-8704 systemctl --version]
	I0207 21:20:46.827453    8152 kic_runner.go:93] Run: sudo service kubelet stop
	I0207 21:20:46.827453    8152 kic_runner.go:114] Args: [docker exec --privileged cilium-20220207210133-8704 sudo service kubelet stop]
	I0207 21:20:48.211580    8152 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container f6185c0db1b8a7c2bb0337f982092be194e5e86deb05b8876274076052b327f8 is not running
	
	** /stderr **
	W0207 21:20:48.211630    8152 kic.go:443] couldn't stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container f6185c0db1b8a7c2bb0337f982092be194e5e86deb05b8876274076052b327f8 is not running
	I0207 21:20:48.224639    8152 kic_runner.go:93] Run: sudo service kubelet stop
	I0207 21:20:48.224639    8152 kic_runner.go:114] Args: [docker exec --privileged cilium-20220207210133-8704 sudo service kubelet stop]
	I0207 21:20:49.623112    8152 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container f6185c0db1b8a7c2bb0337f982092be194e5e86deb05b8876274076052b327f8 is not running
	
	** /stderr **
	W0207 21:20:49.623183    8152 kic.go:445] couldn't force stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container f6185c0db1b8a7c2bb0337f982092be194e5e86deb05b8876274076052b327f8 is not running
	I0207 21:20:49.635894    8152 kic_runner.go:93] Run: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
	I0207 21:20:49.635894    8152 kic_runner.go:114] Args: [docker exec --privileged cilium-20220207210133-8704 docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}]
	I0207 21:20:51.132853    8152 kic.go:456] unable list containers : docker: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container f6185c0db1b8a7c2bb0337f982092be194e5e86deb05b8876274076052b327f8 is not running
	I0207 21:20:51.132954    8152 kic.go:466] successfully stopped kubernetes!
	I0207 21:20:51.150603    8152 kic_runner.go:93] Run: pgrep kube-apiserver
	I0207 21:20:51.150603    8152 kic_runner.go:114] Args: [docker exec --privileged cilium-20220207210133-8704 pgrep kube-apiserver]
	I0207 21:20:53.902161    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:20:55.245110    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.3428589s)
	I0207 21:20:58.258709    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:20:59.631910    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.3731373s)
	I0207 21:21:02.644522    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:21:03.964708    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.3201796s)
	I0207 21:21:06.978675    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:21:08.285425    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.3067435s)
	I0207 21:21:11.298031    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:21:12.609417    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.3112616s)
	I0207 21:21:15.622816    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:21:16.917336    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.2942535s)
	I0207 21:21:19.929759    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:21:21.349418    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.4195325s)
	I0207 21:21:24.364740    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:21:25.751758    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.387011s)
	I0207 21:21:28.764568    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:21:30.116533    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.3519577s)
	I0207 21:21:33.127849    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:21:34.437900    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.3100438s)
	I0207 21:21:37.449128    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:21:38.703255    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.2540437s)
	I0207 21:21:41.715803    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:21:42.975633    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.2598231s)
	I0207 21:21:45.989890    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:21:47.298982    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.3087367s)
	I0207 21:21:50.311827    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:21:51.623237    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.311403s)
	I0207 21:21:54.638264    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:21:55.999302    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.3608523s)
	I0207 21:21:59.016324    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:22:00.387034    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.3707029s)
	I0207 21:22:03.403229    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:22:04.733812    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.3305762s)
	I0207 21:22:07.747885    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:22:09.049764    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.3018723s)
	I0207 21:22:12.063521    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:22:13.401135    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.3374391s)
	I0207 21:22:16.415521    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:22:17.741714    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.3261857s)
	I0207 21:22:20.755056    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:22:22.179552    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.4236052s)
	I0207 21:22:25.192552    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:22:26.596511    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.4038143s)
	I0207 21:22:29.607298    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:22:30.966947    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.3595556s)
	I0207 21:22:33.977300    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:22:35.289469    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.312162s)
	I0207 21:22:38.301537    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:22:39.943835    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.6422525s)
	I0207 21:22:42.955326    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:22:44.374072    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.4187381s)
	I0207 21:22:47.395077    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:22:49.065014    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.6698095s)
	I0207 21:22:52.079731    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:22:53.560538    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.480799s)
	I0207 21:22:56.573907    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:22:57.982483    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.4084333s)
	I0207 21:23:00.993720    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:23:02.355270    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.3610193s)
	I0207 21:23:05.373605    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:23:06.842420    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.4678139s)
	I0207 21:23:09.859162    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:23:11.359091    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.4999203s)
	I0207 21:23:14.371342    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:23:15.818250    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.4468508s)
	I0207 21:23:18.852569    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:23:20.488210    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.6356324s)
	I0207 21:23:23.500842    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:23:24.897970    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.397121s)
	I0207 21:23:27.908498    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:23:29.295610    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.3869807s)
	I0207 21:23:32.309469    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:23:33.779170    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.4690649s)
	I0207 21:23:36.790421    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:23:38.211922    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.4214929s)
	I0207 21:23:41.238362    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:23:42.680396    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.4418794s)
	I0207 21:23:45.696046    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:23:47.066502    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.3704492s)
	I0207 21:23:50.080272    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:23:51.664952    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.5846715s)
	I0207 21:23:54.680410    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:23:56.035763    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.3550618s)
	I0207 21:23:59.048899    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:24:00.455861    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.406891s)
	I0207 21:24:03.468459    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:24:04.868130    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.3996638s)
	I0207 21:24:07.878905    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:24:09.220917    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.341975s)
	I0207 21:24:12.232594    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:24:13.553006    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.3204047s)
	I0207 21:24:16.570543    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:24:17.857049    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.2864329s)
	I0207 21:24:20.873654    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:24:22.308241    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.4345787s)
	I0207 21:24:25.321208    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:24:26.804475    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.4832584s)
	I0207 21:24:29.814826    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:24:31.632410    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.8175748s)
	I0207 21:24:34.649045    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:24:36.064033    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.4148105s)
	I0207 21:24:39.075430    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:24:40.501443    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.4260062s)
	I0207 21:24:43.516160    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:24:44.942661    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.4264936s)
	I0207 21:24:47.956583    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:24:49.265554    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.3089636s)
	I0207 21:24:52.280104    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:24:53.665084    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.3849721s)
	I0207 21:24:56.677052    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:24:58.089416    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.4122519s)
	I0207 21:25:01.106780    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:25:02.545449    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.4385516s)
	I0207 21:25:05.557467    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:25:06.799027    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.2412204s)
	I0207 21:25:09.891090    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:25:11.241590    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.3503485s)
	I0207 21:25:14.253276    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:25:15.605216    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.3518101s)
	I0207 21:25:18.605878    8152 stop.go:59] stop err: Maximum number of retries (60) exceeded
	I0207 21:25:18.605964    8152 delete.go:129] stophost failed (probably ok): Temporary Error: stop: Maximum number of retries (60) exceeded
	I0207 21:25:18.617123    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:25:19.941779    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.3245488s)
	W0207 21:25:19.941968    8152 delete.go:135] deletehost failed: Docker machine "cilium-20220207210133-8704" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0207 21:25:19.947646    8152 cli_runner.go:133] Run: docker container inspect -f {{.Id}} cilium-20220207210133-8704
	I0207 21:25:21.372324    8152 cli_runner.go:186] Completed: docker container inspect -f {{.Id}} cilium-20220207210133-8704: (1.4235309s)
	I0207 21:25:21.379299    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:25:22.718413    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.3391073s)
	I0207 21:25:22.723430    8152 cli_runner.go:133] Run: docker exec --privileged -t cilium-20220207210133-8704 /bin/bash -c "sudo init 0"
	W0207 21:25:24.220710    8152 cli_runner.go:180] docker exec --privileged -t cilium-20220207210133-8704 /bin/bash -c "sudo init 0" returned with exit code 1
	I0207 21:25:24.220710    8152 cli_runner.go:186] Completed: docker exec --privileged -t cilium-20220207210133-8704 /bin/bash -c "sudo init 0": (1.4972727s)
	I0207 21:25:24.220710    8152 oci.go:659] error shutdown cilium-20220207210133-8704: docker exec --privileged -t cilium-20220207210133-8704 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container f6185c0db1b8a7c2bb0337f982092be194e5e86deb05b8876274076052b327f8 is not running
	I0207 21:25:25.226994    8152 cli_runner.go:133] Run: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}
	I0207 21:25:26.498039    8152 cli_runner.go:186] Completed: docker container inspect cilium-20220207210133-8704 --format={{.State.Status}}: (1.2708502s)
	I0207 21:25:26.498039    8152 oci.go:673] temporary error: container cilium-20220207210133-8704 status is  but expect it to be exited
	I0207 21:25:26.498148    8152 oci.go:679] Successfully shutdown container cilium-20220207210133-8704
	I0207 21:25:26.504957    8152 cli_runner.go:133] Run: docker rm -f -v cilium-20220207210133-8704
	I0207 21:25:31.516444    8152 cli_runner.go:186] Completed: docker rm -f -v cilium-20220207210133-8704: (5.0112982s)
	I0207 21:25:31.521256    8152 cli_runner.go:133] Run: docker container inspect -f {{.Id}} cilium-20220207210133-8704
	W0207 21:25:32.920377    8152 cli_runner.go:180] docker container inspect -f {{.Id}} cilium-20220207210133-8704 returned with exit code 1
	I0207 21:25:32.920377    8152 cli_runner.go:186] Completed: docker container inspect -f {{.Id}} cilium-20220207210133-8704: (1.3991134s)
	I0207 21:25:32.926220    8152 cli_runner.go:133] Run: docker network inspect cilium-20220207210133-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0207 21:25:34.796532    8152 cli_runner.go:180] docker network inspect cilium-20220207210133-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0207 21:25:34.796532    8152 cli_runner.go:186] Completed: docker network inspect cilium-20220207210133-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.8703024s)
	I0207 21:25:34.804988    8152 network_create.go:254] running [docker network inspect cilium-20220207210133-8704] to gather additional debugging logs...
	I0207 21:25:34.804988    8152 cli_runner.go:133] Run: docker network inspect cilium-20220207210133-8704
	W0207 21:25:36.320670    8152 cli_runner.go:180] docker network inspect cilium-20220207210133-8704 returned with exit code 1
	I0207 21:25:36.320730    8152 cli_runner.go:186] Completed: docker network inspect cilium-20220207210133-8704: (1.5156452s)
	I0207 21:25:36.320790    8152 network_create.go:257] error running [docker network inspect cilium-20220207210133-8704]: docker network inspect cilium-20220207210133-8704: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20220207210133-8704
	I0207 21:25:36.321058    8152 network_create.go:259] output of [docker network inspect cilium-20220207210133-8704]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20220207210133-8704
	
	** /stderr **
	W0207 21:25:36.322255    8152 delete.go:139] delete failed (probably ok) <nil>
	I0207 21:25:36.322255    8152 fix.go:120] Sleeping 1 second for extra luck!
	I0207 21:25:37.322832    8152 start.go:126] createHost starting for "" (driver="docker")
	I0207 21:25:37.328185    8152 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0207 21:25:37.328185    8152 start.go:160] libmachine.API.Create for "cilium-20220207210133-8704" (driver="docker")
	I0207 21:25:37.328185    8152 client.go:168] LocalClient.Create starting
	I0207 21:25:37.329264    8152 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0207 21:25:37.329600    8152 main.go:130] libmachine: Decoding PEM data...
	I0207 21:25:37.329600    8152 main.go:130] libmachine: Parsing certificate...
	I0207 21:25:37.329874    8152 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0207 21:25:37.330237    8152 main.go:130] libmachine: Decoding PEM data...
	I0207 21:25:37.330264    8152 main.go:130] libmachine: Parsing certificate...
	I0207 21:25:37.345090    8152 cli_runner.go:133] Run: docker network inspect cilium-20220207210133-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0207 21:25:38.647105    8152 cli_runner.go:180] docker network inspect cilium-20220207210133-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0207 21:25:38.647105    8152 cli_runner.go:186] Completed: docker network inspect cilium-20220207210133-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.3019039s)
	I0207 21:25:38.653109    8152 network_create.go:254] running [docker network inspect cilium-20220207210133-8704] to gather additional debugging logs...
	I0207 21:25:38.653109    8152 cli_runner.go:133] Run: docker network inspect cilium-20220207210133-8704
	W0207 21:25:40.008651    8152 cli_runner.go:180] docker network inspect cilium-20220207210133-8704 returned with exit code 1
	I0207 21:25:40.008651    8152 cli_runner.go:186] Completed: docker network inspect cilium-20220207210133-8704: (1.3554151s)
	I0207 21:25:40.008695    8152 network_create.go:257] error running [docker network inspect cilium-20220207210133-8704]: docker network inspect cilium-20220207210133-8704: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20220207210133-8704
	I0207 21:25:40.008742    8152 network_create.go:259] output of [docker network inspect cilium-20220207210133-8704]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20220207210133-8704
	
	** /stderr **
	I0207 21:25:40.016106    8152 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0207 21:25:41.357838    8152 cli_runner.go:186] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.3416154s)
	I0207 21:25:41.379228    8152 network.go:284] reusing subnet 192.168.49.0 that has expired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000ad40a8] amended:true}} dirty:map[192.168.49.0:0xc000ad40a8 192.168.58.0:0xc000ad4a38] misses:0}
	I0207 21:25:41.379228    8152 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:25:41.379228    8152 network_create.go:106] attempt to create docker network cilium-20220207210133-8704 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0207 21:25:41.384277    8152 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220207210133-8704
	I0207 21:25:44.076237    8152 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220207210133-8704: (2.6917617s)
	I0207 21:25:44.076314    8152 network_create.go:90] docker network cilium-20220207210133-8704 192.168.49.0/24 created
	I0207 21:25:44.076314    8152 kic.go:106] calculated static IP "192.168.49.2" for the "cilium-20220207210133-8704" container
	I0207 21:25:44.090576    8152 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0207 21:25:45.497579    8152 cli_runner.go:186] Completed: docker ps -a --format {{.Names}}: (1.4068811s)
	I0207 21:25:45.505389    8152 cli_runner.go:133] Run: docker volume create cilium-20220207210133-8704 --label name.minikube.sigs.k8s.io=cilium-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true
	I0207 21:25:46.990427    8152 cli_runner.go:186] Completed: docker volume create cilium-20220207210133-8704 --label name.minikube.sigs.k8s.io=cilium-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true: (1.4850293s)
	I0207 21:25:46.990557    8152 oci.go:102] Successfully created a docker volume cilium-20220207210133-8704
	I0207 21:25:47.000016    8152 cli_runner.go:133] Run: docker run --rm --name cilium-20220207210133-8704-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220207210133-8704 --entrypoint /usr/bin/test -v cilium-20220207210133-8704:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -d /var/lib
	I0207 21:25:50.403627    8152 cli_runner.go:186] Completed: docker run --rm --name cilium-20220207210133-8704-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220207210133-8704 --entrypoint /usr/bin/test -v cilium-20220207210133-8704:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -d /var/lib: (3.4035928s)
	I0207 21:25:50.403627    8152 oci.go:106] Successfully prepared a docker volume cilium-20220207210133-8704
	I0207 21:25:50.403627    8152 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 21:25:50.403627    8152 kic.go:179] Starting extracting preloaded images to volume ...
	I0207 21:25:50.409633    8152 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20220207210133-8704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0207 21:26:35.191039    8152 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20220207210133-8704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir: (44.7811065s)
	I0207 21:26:35.191260    8152 kic.go:188] duration metric: took 44.787277 seconds to extract preloaded images to volume
	I0207 21:26:35.207162    8152 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 21:26:37.694635    8152 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.4866703s)
	I0207 21:26:37.695108    8152 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:67 OomKillDisable:true NGoroutines:71 SystemTime:2022-02-07 21:26:36.5482706 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_
bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 21:26:37.703569    8152 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0207 21:26:40.213828    8152 cli_runner.go:186] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.5102452s)
	I0207 21:26:40.226717    8152 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220207210133-8704 --name cilium-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220207210133-8704 --network cilium-20220207210133-8704 --ip 192.168.49.2 --volume cilium-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8
	W0207 21:26:44.401145    8152 cli_runner.go:180] docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220207210133-8704 --name cilium-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220207210133-8704 --network cilium-20220207210133-8704 --ip 192.168.49.2 --volume cilium-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 returned with exit code 125
	I0207 21:26:44.401145    8152 cli_runner.go:186] Completed: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220207210133-8704 --name cilium-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220207210133-8704 --network cilium-20220207210133-8704 --ip 192.168.49.2 --volume cilium-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: (4.1744057s)
	I0207 21:26:44.401145    8152 client.go:171] LocalClient.Create took 1m7.0726051s
	I0207 21:26:46.411530    8152 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0207 21:26:46.416337    8152 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220207210133-8704
	W0207 21:26:47.734531    8152 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220207210133-8704 returned with exit code 1
	I0207 21:26:47.734531    8152 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220207210133-8704: (1.3180671s)
	I0207 21:26:47.734805    8152 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 21:26:48.035527    8152 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220207210133-8704
	W0207 21:26:49.381549    8152 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220207210133-8704 returned with exit code 1
	I0207 21:26:49.381549    8152 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220207210133-8704: (1.3460143s)
	W0207 21:26:49.381549    8152 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0207 21:26:49.381549    8152 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 21:26:49.381549    8152 start.go:129] duration metric: createHost completed in 1m12.0583347s
	I0207 21:26:49.389414    8152 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0207 21:26:49.395141    8152 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220207210133-8704
	W0207 21:26:50.764217    8152 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220207210133-8704 returned with exit code 1
	I0207 21:26:50.764217    8152 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220207210133-8704: (1.3690683s)
	I0207 21:26:50.764217    8152 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 21:26:51.002677    8152 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220207210133-8704
	W0207 21:26:52.355671    8152 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220207210133-8704 returned with exit code 1
	I0207 21:26:52.355671    8152 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220207210133-8704: (1.3529346s)
	W0207 21:26:52.355671    8152 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0207 21:26:52.355671    8152 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 21:26:52.355671    8152 fix.go:57] fixHost completed within 6m10.8791546s
	I0207 21:26:52.355671    8152 start.go:80] releasing machines lock for "cilium-20220207210133-8704", held for 6m10.879356s
	W0207 21:26:52.355671    8152 out.go:241] * Failed to start docker container. Running "minikube delete -p cilium-20220207210133-8704" may fix it: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220207210133-8704 --name cilium-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220207210133-8704 --network cilium-20220207210133-8704 --ip 192.168.49.2 --volume cilium-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v
0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	352e1f3b7d7852f7938216015a8688b8726c23ddebeba9ee33d6a4a29f8cd1d7
	
	stderr:
	docker: Error response from daemon: network cilium-20220207210133-8704 not found.
	
	I0207 21:26:52.810969    8152 out.go:176] 
	W0207 21:26:52.811701    8152 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220207210133-8704 --name cilium-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220207210133-8704 --network cilium-20220207210133-8704 --ip 192.168.49.2 --volume cilium-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8
936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	352e1f3b7d7852f7938216015a8688b8726c23ddebeba9ee33d6a4a29f8cd1d7
	
	stderr:
	docker: Error response from daemon: network cilium-20220207210133-8704 not found.
	
	W0207 21:26:52.811784    8152 out.go:241] * 
	W0207 21:26:52.813264    8152 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0207 21:26:53.063803    8152 out.go:176] 

                                                
                                                
** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/cilium/Start (461.56s)
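Triage note: the root cause of this failure is the daemon error `network cilium-20220207210133-8704 not found`, which is buried under several layers of retry and wrapper messages. As a minimal sketch (this helper is hypothetical, not part of minikube or the test suite), the daemon error can be pulled out of a captured stderr excerpt like so; the sample line is copied from the stderr block above:

```shell
#!/bin/sh
# Hypothetical triage helper: extract the root-cause daemon error from a
# captured log line. The sample input is taken verbatim from the log above.
line='docker: Error response from daemon: network cilium-20220207210133-8704 not found.'
# Strip the "docker: Error response from daemon: " prefix and print the rest.
printf '%s\n' "$line" | sed -n 's/^docker: Error response from daemon: //p'
```

Running this against the failure above prints `network cilium-20220207210133-8704 not found.`, pointing at the missing docker network rather than the generic exit status 125.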

                                                
                                    
TestPause/serial/VerifyDeletedResources (14.02s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
pause_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (6.9865275s)
pause_test.go:169: (dbg) Run:  docker ps -a
E0207 21:19:41.520482    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
pause_test.go:169: (dbg) Done: docker ps -a: (1.3413383s)
pause_test.go:174: (dbg) Run:  docker volume inspect pause-20220207211356-8704
pause_test.go:174: (dbg) Done: docker volume inspect pause-20220207211356-8704: (1.2205374s)
pause_test.go:176: expected to see error and volume "docker volume inspect pause-20220207211356-8704" to not exist after deletion but got no error and this output: 
-- stdout --
	[
	    {
	        "CreatedAt": "2022-02-07T21:15:25Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "pause-20220207211356-8704"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/pause-20220207211356-8704/_data",
	        "Name": "pause-20220207211356-8704",
	        "Options": {},
	        "Scope": "local"
	    }
	]

                                                
                                                
-- /stdout --
pause_test.go:179: (dbg) Run:  docker network ls
pause_test.go:179: (dbg) Done: docker network ls: (1.2238125s)
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/VerifyDeletedResources]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20220207211356-8704
helpers_test.go:232: (dbg) Done: docker inspect pause-20220207211356-8704: (1.2744046s)
helpers_test.go:236: (dbg) docker inspect pause-20220207211356-8704:

                                                
                                                
-- stdout --
	[
	    {
	        "CreatedAt": "2022-02-07T21:15:25Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "pause-20220207211356-8704"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/pause-20220207211356-8704/_data",
	        "Name": "pause-20220207211356-8704",
	        "Options": {},
	        "Scope": "local"
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20220207211356-8704 -n pause-20220207211356-8704
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20220207211356-8704 -n pause-20220207211356-8704: exit status 85 (315.8943ms)

                                                
                                                
-- stdout --
	* Profile "pause-20220207211356-8704" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-20220207211356-8704"

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 85 (may be ok)
helpers_test.go:242: "pause-20220207211356-8704" host is not running, skipping log retrieval (state="* Profile \"pause-20220207211356-8704\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p pause-20220207211356-8704\"")
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/VerifyDeletedResources]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20220207211356-8704
helpers_test.go:232: (dbg) Done: docker inspect pause-20220207211356-8704: (1.3109398s)
helpers_test.go:236: (dbg) docker inspect pause-20220207211356-8704:

                                                
                                                
-- stdout --
	[
	    {
	        "CreatedAt": "2022-02-07T21:15:25Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "pause-20220207211356-8704"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/pause-20220207211356-8704/_data",
	        "Name": "pause-20220207211356-8704",
	        "Options": {},
	        "Scope": "local"
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20220207211356-8704 -n pause-20220207211356-8704
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20220207211356-8704 -n pause-20220207211356-8704: exit status 85 (303.7603ms)

                                                
                                                
-- stdout --
	* Profile "pause-20220207211356-8704" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-20220207211356-8704"

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 85 (may be ok)
helpers_test.go:242: "pause-20220207211356-8704" host is not running, skipping log retrieval (state="* Profile \"pause-20220207211356-8704\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p pause-20220207211356-8704\"")
--- FAIL: TestPause/serial/VerifyDeletedResources (14.02s)
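Triage note: this test fails because `docker volume inspect pause-20220207211356-8704` succeeded after deletion, i.e. a minikube-labeled volume survived. As a sketch (this check is an assumption for illustration, not code from the test suite), a leftover volume can be detected by looking for the `created_by.minikube.sigs.k8s.io` label in the inspect JSON; the inlined JSON mirrors the inspect output shown above:

```shell
#!/bin/sh
# Hypothetical leftover-resource check: scan `docker volume inspect` JSON for
# the minikube creation label. The JSON below mirrors the output logged above.
json='[{"CreatedAt":"2022-02-07T21:15:25Z","Labels":{"created_by.minikube.sigs.k8s.io":"true"},"Name":"pause-20220207211356-8704"}]'
if printf '%s' "$json" | grep -q '"created_by.minikube.sigs.k8s.io":"true"'; then
  echo "leftover minikube volume detected"
fi
```

In a live environment the JSON would come from `docker volume inspect <name>`; here the presence of the label confirms the volume was created by minikube and should have been removed with the profile.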

                                                
                                    
TestNetworkPlugins/group/custom-weave/Start (489.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-weave-20220207210133-8704 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata\weavenet.yaml --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p custom-weave-20220207210133-8704 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata\weavenet.yaml --driver=docker: exit status 80 (8m9.0850444s)

                                                
                                                
-- stdout --
	* [custom-weave-20220207210133-8704] minikube v1.25.1 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=13439
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Starting control plane node custom-weave-20220207210133-8704 in cluster custom-weave-20220207210133-8704
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "custom-weave-20220207210133-8704" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0207 21:26:03.517417    1820 out.go:297] Setting OutFile to fd 1628 ...
	I0207 21:26:03.606023    1820 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 21:26:03.606023    1820 out.go:310] Setting ErrFile to fd 1824...
	I0207 21:26:03.606023    1820 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 21:26:03.617996    1820 out.go:304] Setting JSON to false
	I0207 21:26:03.621986    1820 start.go:112] hostinfo: {"hostname":"minikube3","uptime":436782,"bootTime":1643832381,"procs":158,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0207 21:26:03.621986    1820 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0207 21:26:03.625982    1820 out.go:176] * [custom-weave-20220207210133-8704] minikube v1.25.1 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I0207 21:26:03.625982    1820 notify.go:174] Checking for updates...
	I0207 21:26:03.629994    1820 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0207 21:26:03.631988    1820 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0207 21:26:03.635987    1820 out.go:176]   - MINIKUBE_LOCATION=13439
	I0207 21:26:03.637987    1820 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0207 21:26:03.638995    1820 config.go:176] Loaded profile config "auto-20220207210111-8704": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 21:26:03.639985    1820 config.go:176] Loaded profile config "cilium-20220207210133-8704": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 21:26:03.639985    1820 config.go:176] Loaded profile config "false-20220207210133-8704": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 21:26:03.639985    1820 driver.go:344] Setting default libvirt URI to qemu:///system
	I0207 21:26:06.510297    1820 docker.go:132] docker version: linux-20.10.12
	I0207 21:26:06.515312    1820 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 21:26:08.922193    1820 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.4068677s)
	I0207 21:26:08.922995    1820 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:69 OomKillDisable:true NGoroutines:66 SystemTime:2022-02-07 21:26:07.7650664 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_
bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 21:26:09.168230    1820 out.go:176] * Using the docker driver based on user configuration
	I0207 21:26:09.168928    1820 start.go:281] selected driver: docker
	I0207 21:26:09.168928    1820 start.go:798] validating driver "docker" against <nil>
	I0207 21:26:09.169060    1820 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0207 21:26:09.295583    1820 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 21:26:11.738396    1820 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.4428003s)
	I0207 21:26:11.738930    1820 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:67 OomKillDisable:true NGoroutines:67 SystemTime:2022-02-07 21:26:10.5606578 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 21:26:11.739285    1820 start_flags.go:288] no existing cluster config was found, will generate one from the flags 
	I0207 21:26:11.740093    1820 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	I0207 21:26:11.740209    1820 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0207 21:26:11.740235    1820 cni.go:93] Creating CNI manager for "testdata\\weavenet.yaml"
	I0207 21:26:11.740235    1820 start_flags.go:297] Found "testdata\\weavenet.yaml" CNI - setting NetworkPlugin=cni
	I0207 21:26:11.740235    1820 start_flags.go:302] config:
	{Name:custom-weave-20220207210133-8704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:custom-weave-20220207210133-8704 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 21:26:11.812830    1820 out.go:176] * Starting control plane node custom-weave-20220207210133-8704 in cluster custom-weave-20220207210133-8704
	I0207 21:26:11.812830    1820 cache.go:120] Beginning downloading kic base image for docker with docker
	I0207 21:26:11.817619    1820 out.go:176] * Pulling base image ...
	I0207 21:26:11.817957    1820 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon
	I0207 21:26:11.817957    1820 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 21:26:11.818222    1820 preload.go:148] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4
	I0207 21:26:11.818222    1820 cache.go:57] Caching tarball of preloaded images
	I0207 21:26:11.818923    1820 preload.go:174] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0207 21:26:11.819555    1820 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.3 on docker
	I0207 21:26:11.819715    1820 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\custom-weave-20220207210133-8704\config.json ...
	I0207 21:26:11.819715    1820 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\custom-weave-20220207210133-8704\config.json: {Name:mkb5d5d6f70e57c515709d567b9d3ce19517e2ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 21:26:13.121261    1820 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon, skipping pull
	I0207 21:26:13.121261    1820 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 exists in daemon, skipping load
	I0207 21:26:13.121326    1820 cache.go:208] Successfully downloaded all kic artifacts
	I0207 21:26:13.121604    1820 start.go:313] acquiring machines lock for custom-weave-20220207210133-8704: {Name:mk232b23212695fd5d65528d6e1788f190976f8a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0207 21:26:13.121817    1820 start.go:317] acquired machines lock for "custom-weave-20220207210133-8704" in 169.9µs
	I0207 21:26:13.121926    1820 start.go:89] Provisioning new machine with config: &{Name:custom-weave-20220207210133-8704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:custom-weave-20220207210133-8704 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0207 21:26:13.122192    1820 start.go:126] createHost starting for "" (driver="docker")
	I0207 21:26:13.567769    1820 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0207 21:26:13.963394    1820 start.go:160] libmachine.API.Create for "custom-weave-20220207210133-8704" (driver="docker")
	I0207 21:26:13.963533    1820 client.go:168] LocalClient.Create starting
	I0207 21:26:13.964188    1820 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0207 21:26:13.964422    1820 main.go:130] libmachine: Decoding PEM data...
	I0207 21:26:13.964502    1820 main.go:130] libmachine: Parsing certificate...
	I0207 21:26:13.964580    1820 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0207 21:26:13.964580    1820 main.go:130] libmachine: Decoding PEM data...
	I0207 21:26:13.964580    1820 main.go:130] libmachine: Parsing certificate...
	I0207 21:26:13.975380    1820 cli_runner.go:133] Run: docker network inspect custom-weave-20220207210133-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0207 21:26:15.283880    1820 cli_runner.go:180] docker network inspect custom-weave-20220207210133-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0207 21:26:15.283880    1820 cli_runner.go:186] Completed: docker network inspect custom-weave-20220207210133-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.3084928s)
	I0207 21:26:15.288868    1820 network_create.go:254] running [docker network inspect custom-weave-20220207210133-8704] to gather additional debugging logs...
	I0207 21:26:15.288868    1820 cli_runner.go:133] Run: docker network inspect custom-weave-20220207210133-8704
	W0207 21:26:16.630878    1820 cli_runner.go:180] docker network inspect custom-weave-20220207210133-8704 returned with exit code 1
	I0207 21:26:16.630878    1820 cli_runner.go:186] Completed: docker network inspect custom-weave-20220207210133-8704: (1.342003s)
	I0207 21:26:16.630878    1820 network_create.go:257] error running [docker network inspect custom-weave-20220207210133-8704]: docker network inspect custom-weave-20220207210133-8704: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: custom-weave-20220207210133-8704
	I0207 21:26:16.630878    1820 network_create.go:259] output of [docker network inspect custom-weave-20220207210133-8704]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: custom-weave-20220207210133-8704
	
	** /stderr **
	I0207 21:26:16.635871    1820 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0207 21:26:17.929345    1820 cli_runner.go:186] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.2933768s)
	I0207 21:26:17.954204    1820 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0005222b0] misses:0}
	I0207 21:26:17.954800    1820 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:26:17.954832    1820 network_create.go:106] attempt to create docker network custom-weave-20220207210133-8704 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0207 21:26:17.960755    1820 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220207210133-8704
	W0207 21:26:21.840238    1820 cli_runner.go:180] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220207210133-8704 returned with exit code 1
	I0207 21:26:21.840238    1820 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220207210133-8704: (3.8794626s)
	W0207 21:26:21.840238    1820 network_create.go:98] failed to create docker network custom-weave-20220207210133-8704 192.168.49.0/24, will retry: subnet is taken
	I0207 21:26:21.860531    1820 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005222b0] amended:false}} dirty:map[] misses:0}
	I0207 21:26:21.860531    1820 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:26:21.880075    1820 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005222b0] amended:true}} dirty:map[192.168.49.0:0xc0005222b0 192.168.58.0:0xc000d16228] misses:0}
	I0207 21:26:21.880679    1820 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:26:21.880679    1820 network_create.go:106] attempt to create docker network custom-weave-20220207210133-8704 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0207 21:26:21.885451    1820 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220207210133-8704
	W0207 21:26:23.162418    1820 cli_runner.go:180] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220207210133-8704 returned with exit code 1
	I0207 21:26:23.162418    1820 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220207210133-8704: (1.2768613s)
	W0207 21:26:23.162418    1820 network_create.go:98] failed to create docker network custom-weave-20220207210133-8704 192.168.58.0/24, will retry: subnet is taken
	I0207 21:26:23.181576    1820 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005222b0] amended:true}} dirty:map[192.168.49.0:0xc0005222b0 192.168.58.0:0xc000d16228] misses:1}
	I0207 21:26:23.181576    1820 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:26:23.199774    1820 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005222b0] amended:true}} dirty:map[192.168.49.0:0xc0005222b0 192.168.58.0:0xc000d16228 192.168.67.0:0xc000d162d0] misses:1}
	I0207 21:26:23.199774    1820 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:26:23.199774    1820 network_create.go:106] attempt to create docker network custom-weave-20220207210133-8704 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0207 21:26:23.205699    1820 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220207210133-8704
	I0207 21:26:26.698260    1820 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220207210133-8704: (3.4924068s)
	I0207 21:26:26.698427    1820 network_create.go:90] docker network custom-weave-20220207210133-8704 192.168.67.0/24 created
	I0207 21:26:26.698427    1820 kic.go:106] calculated static IP "192.168.67.2" for the "custom-weave-20220207210133-8704" container
	I0207 21:26:26.713512    1820 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0207 21:26:28.117658    1820 cli_runner.go:186] Completed: docker ps -a --format {{.Names}}: (1.4041386s)
	I0207 21:26:28.124264    1820 cli_runner.go:133] Run: docker volume create custom-weave-20220207210133-8704 --label name.minikube.sigs.k8s.io=custom-weave-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true
	I0207 21:26:32.449515    1820 cli_runner.go:186] Completed: docker volume create custom-weave-20220207210133-8704 --label name.minikube.sigs.k8s.io=custom-weave-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true: (4.3252273s)
	I0207 21:26:32.449717    1820 oci.go:102] Successfully created a docker volume custom-weave-20220207210133-8704
	I0207 21:26:32.456313    1820 cli_runner.go:133] Run: docker run --rm --name custom-weave-20220207210133-8704-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220207210133-8704 --entrypoint /usr/bin/test -v custom-weave-20220207210133-8704:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -d /var/lib
	I0207 21:26:37.382968    1820 cli_runner.go:186] Completed: docker run --rm --name custom-weave-20220207210133-8704-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220207210133-8704 --entrypoint /usr/bin/test -v custom-weave-20220207210133-8704:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -d /var/lib: (4.9266296s)
	I0207 21:26:37.382968    1820 oci.go:106] Successfully prepared a docker volume custom-weave-20220207210133-8704
	I0207 21:26:37.382968    1820 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 21:26:37.382968    1820 kic.go:179] Starting extracting preloaded images to volume ...
	I0207 21:26:37.387745    1820 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20220207210133-8704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0207 21:27:32.146039    1820 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20220207210133-8704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir: (54.7580048s)
	I0207 21:27:32.146039    1820 kic.go:188] duration metric: took 54.762781 seconds to extract preloaded images to volume
	I0207 21:27:32.153077    1820 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 21:27:34.597692    1820 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.4446022s)
	I0207 21:27:34.597692    1820 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:66 OomKillDisable:true NGoroutines:64 SystemTime:2022-02-07 21:27:33.4486319 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 21:27:34.602720    1820 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0207 21:27:36.937774    1820 cli_runner.go:186] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.335042s)
	I0207 21:27:36.947736    1820 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20220207210133-8704 --name custom-weave-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20220207210133-8704 --network custom-weave-20220207210133-8704 --ip 192.168.67.2 --volume custom-weave-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8
	W0207 21:27:41.362626    1820 cli_runner.go:180] docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20220207210133-8704 --name custom-weave-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20220207210133-8704 --network custom-weave-20220207210133-8704 --ip 192.168.67.2 --volume custom-weave-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 returned with exit code 125
	I0207 21:27:41.362626    1820 cli_runner.go:186] Completed: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20220207210133-8704 --name custom-weave-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20220207210133-8704 --network custom-weave-20220207210133-8704 --ip 192.168.67.2 --volume custom-weave-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: (4.4148667s)
	I0207 21:27:41.362626    1820 client.go:171] LocalClient.Create took 1m27.3986313s
	I0207 21:27:43.373163    1820 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0207 21:27:43.379985    1820 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220207210133-8704
	W0207 21:27:44.727195    1820 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220207210133-8704 returned with exit code 1
	I0207 21:27:44.727195    1820 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220207210133-8704: (1.3472035s)
	I0207 21:27:44.727195    1820 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 21:27:45.008492    1820 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220207210133-8704
	W0207 21:27:46.460551    1820 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220207210133-8704 returned with exit code 1
	I0207 21:27:46.460551    1820 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220207210133-8704: (1.4520509s)
	W0207 21:27:46.460551    1820 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0207 21:27:46.460551    1820 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 21:27:46.460551    1820 start.go:129] duration metric: createHost completed in 1m33.337866s
	I0207 21:27:46.460551    1820 start.go:80] releasing machines lock for "custom-weave-20220207210133-8704", held for 1m33.338174s
	W0207 21:27:46.460551    1820 start.go:570] error starting host: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20220207210133-8704 --name custom-weave-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20220207210133-8704 --network custom-weave-20220207210133-8704 --ip 192.168.67.2 --volume custom-weave-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	e73e52b900d3e4ea7fd98c356059dd117b7b9e312fdd0a544d0e8f6055f7eb09
	
	stderr:
	docker: Error response from daemon: network custom-weave-20220207210133-8704 not found.
	I0207 21:27:46.474490    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:27:47.846325    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.3718269s)
	W0207 21:27:47.846325    1820 start.go:575] delete host: Docker machine "custom-weave-20220207210133-8704" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	W0207 21:27:47.846325    1820 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20220207210133-8704 --name custom-weave-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20220207210133-8704 --network custom-weave-20220207210133-8704 --ip 192.168.67.2 --volume custom-weave-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	e73e52b900d3e4ea7fd98c356059dd117b7b9e312fdd0a544d0e8f6055f7eb09
	
	stderr:
	docker: Error response from daemon: network custom-weave-20220207210133-8704 not found.
	
	! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20220207210133-8704 --name custom-weave-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20220207210133-8704 --network custom-weave-20220207210133-8704 --ip 192.168.67.2 --volume custom-weave-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	e73e52b900d3e4ea7fd98c356059dd117b7b9e312fdd0a544d0e8f6055f7eb09
	
	stderr:
	docker: Error response from daemon: network custom-weave-20220207210133-8704 not found.
	
	I0207 21:27:47.846325    1820 start.go:585] Will try again in 5 seconds ...
	I0207 21:27:52.848528    1820 start.go:313] acquiring machines lock for custom-weave-20220207210133-8704: {Name:mk232b23212695fd5d65528d6e1788f190976f8a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0207 21:27:52.848528    1820 start.go:317] acquired machines lock for "custom-weave-20220207210133-8704" in 0s
	I0207 21:27:52.848528    1820 start.go:93] Skipping create...Using existing machine configuration
	I0207 21:27:52.848528    1820 fix.go:55] fixHost starting: 
	I0207 21:27:52.859180    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:27:54.149069    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.2898822s)
	I0207 21:27:54.149069    1820 fix.go:108] recreateIfNeeded on custom-weave-20220207210133-8704: state= err=<nil>
	I0207 21:27:54.149069    1820 fix.go:113] machineExists: false. err=machine does not exist
	I0207 21:27:54.155056    1820 out.go:176] * docker "custom-weave-20220207210133-8704" container is missing, will recreate.
	I0207 21:27:54.155056    1820 delete.go:124] DEMOLISHING custom-weave-20220207210133-8704 ...
	I0207 21:27:54.169066    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:27:55.482015    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.3129417s)
	I0207 21:27:55.482015    1820 stop.go:79] host is in state 
	I0207 21:27:55.482015    1820 main.go:130] libmachine: Stopping "custom-weave-20220207210133-8704"...
	I0207 21:27:55.493017    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:27:56.784193    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.2911693s)
	I0207 21:27:56.797200    1820 kic_runner.go:93] Run: systemctl --version
	I0207 21:27:56.797200    1820 kic_runner.go:114] Args: [docker exec --privileged custom-weave-20220207210133-8704 systemctl --version]
	I0207 21:27:58.315690    1820 kic_runner.go:93] Run: sudo service kubelet stop
	I0207 21:27:58.315690    1820 kic_runner.go:114] Args: [docker exec --privileged custom-weave-20220207210133-8704 sudo service kubelet stop]
	I0207 21:27:59.858650    1820 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container e73e52b900d3e4ea7fd98c356059dd117b7b9e312fdd0a544d0e8f6055f7eb09 is not running
	
	** /stderr **
	W0207 21:27:59.858650    1820 kic.go:443] couldn't stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container e73e52b900d3e4ea7fd98c356059dd117b7b9e312fdd0a544d0e8f6055f7eb09 is not running
	I0207 21:27:59.871649    1820 kic_runner.go:93] Run: sudo service kubelet stop
	I0207 21:27:59.871649    1820 kic_runner.go:114] Args: [docker exec --privileged custom-weave-20220207210133-8704 sudo service kubelet stop]
	I0207 21:28:01.361058    1820 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container e73e52b900d3e4ea7fd98c356059dd117b7b9e312fdd0a544d0e8f6055f7eb09 is not running
	
	** /stderr **
	W0207 21:28:01.361058    1820 kic.go:445] couldn't force stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container e73e52b900d3e4ea7fd98c356059dd117b7b9e312fdd0a544d0e8f6055f7eb09 is not running
	I0207 21:28:01.371556    1820 kic_runner.go:93] Run: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
	I0207 21:28:01.371556    1820 kic_runner.go:114] Args: [docker exec --privileged custom-weave-20220207210133-8704 docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}]
	I0207 21:28:02.949960    1820 kic.go:456] unable list containers : docker: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container e73e52b900d3e4ea7fd98c356059dd117b7b9e312fdd0a544d0e8f6055f7eb09 is not running
	I0207 21:28:02.950164    1820 kic.go:466] successfully stopped kubernetes!
	I0207 21:28:02.971047    1820 kic_runner.go:93] Run: pgrep kube-apiserver
	I0207 21:28:02.971047    1820 kic_runner.go:114] Args: [docker exec --privileged custom-weave-20220207210133-8704 pgrep kube-apiserver]
	I0207 21:28:06.366035    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:28:07.921625    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.5545773s)
	I0207 21:28:10.933769    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:28:12.354638    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.4207797s)
	I0207 21:28:15.367331    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:28:16.730276    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.3629375s)
	I0207 21:28:19.744728    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:28:21.097664    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.3529289s)
	I0207 21:28:24.110296    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:28:25.468747    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.3584439s)
	I0207 21:28:28.484027    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:28:29.834408    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.3503318s)
	I0207 21:28:32.846267    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:28:34.216380    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.3701058s)
	I0207 21:28:37.230046    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:28:38.639598    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.4095439s)
	I0207 21:28:41.839755    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:28:43.258406    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.4186434s)
	I0207 21:28:46.276076    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:28:47.882971    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.6068861s)
	I0207 21:28:50.894667    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:28:52.334218    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.4395433s)
	I0207 21:28:55.353540    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:28:56.770071    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.4165231s)
	I0207 21:28:59.784199    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:29:01.092913    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.3086685s)
	I0207 21:29:04.106909    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:29:05.419293    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.312177s)
	I0207 21:29:08.445209    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:29:09.818267    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.3729451s)
	I0207 21:29:12.829733    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:29:14.199543    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.369647s)
	I0207 21:29:17.215863    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:29:18.660344    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.4444737s)
	I0207 21:29:21.674495    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:29:23.019176    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.3443195s)
	I0207 21:29:26.031913    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:29:27.396190    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.3642242s)
	I0207 21:29:30.409969    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:29:31.838364    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.4283877s)
	I0207 21:29:34.855324    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:29:36.182509    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.3270642s)
	I0207 21:29:39.198393    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:29:40.504603    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.3061001s)
	I0207 21:29:43.516567    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:29:44.941997    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.4252942s)
	I0207 21:29:47.958991    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:29:49.390816    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.4316184s)
	I0207 21:29:52.402652    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:29:53.747109    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.3444501s)
	I0207 21:29:56.765869    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:29:58.157013    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.3911365s)
	I0207 21:30:01.178791    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:30:02.598272    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.4184747s)
	I0207 21:30:05.610582    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:30:06.921195    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.3104569s)
	I0207 21:30:09.946110    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:30:11.469705    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.5235869s)
	I0207 21:30:14.484549    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:30:16.258002    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.7734435s)
	I0207 21:30:19.272032    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:30:20.684054    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.4120141s)
	I0207 21:30:23.697520    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:30:25.053336    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.3558094s)
	I0207 21:30:28.069638    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:30:29.369762    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.3001163s)
	I0207 21:30:32.383275    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:30:33.708929    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.3253366s)
	I0207 21:30:36.775857    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:30:38.174183    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.398318s)
	I0207 21:30:41.188468    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:30:42.506335    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.3177952s)
	I0207 21:30:45.519217    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:30:46.875987    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.3566509s)
	I0207 21:30:49.891596    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:30:51.241606    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.3500028s)
	I0207 21:30:54.255757    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:30:55.693801    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.4380359s)
	I0207 21:30:58.710945    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:31:00.126949    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.4159967s)
	I0207 21:31:03.144986    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:31:04.505250    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.360149s)
	I0207 21:31:07.519423    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:31:08.887659    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.3681849s)
	I0207 21:31:11.900250    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:31:13.281401    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.3808853s)
	I0207 21:31:16.293755    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:31:17.758137    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.4642495s)
	I0207 21:31:20.774389    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:31:22.178447    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.4038206s)
	I0207 21:31:25.191184    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:31:26.572457    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.3810708s)
	I0207 21:31:29.586283    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:31:30.847266    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.2609761s)
	I0207 21:31:33.864592    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:31:35.233017    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.3683393s)
	I0207 21:31:38.246645    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:31:39.595522    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.3487845s)
	I0207 21:31:42.607294    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:31:43.980392    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.3730904s)
	I0207 21:31:46.998829    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:31:48.338568    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.3397324s)
	I0207 21:31:51.351758    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:31:52.737365    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.3854292s)
	I0207 21:31:55.758191    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:31:57.168673    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.4104105s)
	I0207 21:32:00.185398    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:32:01.510080    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.3244883s)
	I0207 21:32:04.523801    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:32:05.843707    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.3197874s)
	I0207 21:32:08.857384    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:32:10.309030    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.4515647s)
	I0207 21:32:13.327945    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:32:14.801519    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.4721495s)
	I0207 21:32:17.812881    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:32:19.113765    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.300808s)
	I0207 21:32:22.126849    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:32:23.472486    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.3455492s)
	I0207 21:32:26.485488    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:32:28.060304    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.574807s)
	I0207 21:32:31.061894    1820 stop.go:59] stop err: Maximum number of retries (60) exceeded
	I0207 21:32:31.061894    1820 delete.go:129] stophost failed (probably ok): Temporary Error: stop: Maximum number of retries (60) exceeded
	I0207 21:32:31.076539    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:32:32.471673    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.3950049s)
	W0207 21:32:32.471753    1820 delete.go:135] deletehost failed: Docker machine "custom-weave-20220207210133-8704" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0207 21:32:32.476599    1820 cli_runner.go:133] Run: docker container inspect -f {{.Id}} custom-weave-20220207210133-8704
	I0207 21:32:33.902909    1820 cli_runner.go:186] Completed: docker container inspect -f {{.Id}} custom-weave-20220207210133-8704: (1.4263018s)
	I0207 21:32:33.907901    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:32:35.242251    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.3343425s)
	I0207 21:32:35.253099    1820 cli_runner.go:133] Run: docker exec --privileged -t custom-weave-20220207210133-8704 /bin/bash -c "sudo init 0"
	W0207 21:32:36.789249    1820 cli_runner.go:180] docker exec --privileged -t custom-weave-20220207210133-8704 /bin/bash -c "sudo init 0" returned with exit code 1
	I0207 21:32:36.789249    1820 cli_runner.go:186] Completed: docker exec --privileged -t custom-weave-20220207210133-8704 /bin/bash -c "sudo init 0": (1.5361409s)
	I0207 21:32:36.789249    1820 oci.go:659] error shutdown custom-weave-20220207210133-8704: docker exec --privileged -t custom-weave-20220207210133-8704 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container e73e52b900d3e4ea7fd98c356059dd117b7b9e312fdd0a544d0e8f6055f7eb09 is not running
	I0207 21:32:37.799648    1820 cli_runner.go:133] Run: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}
	I0207 21:32:39.135301    1820 cli_runner.go:186] Completed: docker container inspect custom-weave-20220207210133-8704 --format={{.State.Status}}: (1.3356459s)
	I0207 21:32:39.135301    1820 oci.go:673] temporary error: container custom-weave-20220207210133-8704 status is  but expect it to be exited
	I0207 21:32:39.135301    1820 oci.go:679] Successfully shutdown container custom-weave-20220207210133-8704
	I0207 21:32:39.141300    1820 cli_runner.go:133] Run: docker rm -f -v custom-weave-20220207210133-8704
	I0207 21:32:42.120910    1820 cli_runner.go:186] Completed: docker rm -f -v custom-weave-20220207210133-8704: (2.9785749s)
	I0207 21:32:42.125910    1820 cli_runner.go:133] Run: docker container inspect -f {{.Id}} custom-weave-20220207210133-8704
	W0207 21:32:43.471800    1820 cli_runner.go:180] docker container inspect -f {{.Id}} custom-weave-20220207210133-8704 returned with exit code 1
	I0207 21:32:43.471800    1820 cli_runner.go:186] Completed: docker container inspect -f {{.Id}} custom-weave-20220207210133-8704: (1.3458834s)
	I0207 21:32:43.478848    1820 cli_runner.go:133] Run: docker network inspect custom-weave-20220207210133-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0207 21:32:44.846086    1820 cli_runner.go:180] docker network inspect custom-weave-20220207210133-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0207 21:32:44.846086    1820 cli_runner.go:186] Completed: docker network inspect custom-weave-20220207210133-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.3671499s)
	I0207 21:32:44.851859    1820 network_create.go:254] running [docker network inspect custom-weave-20220207210133-8704] to gather additional debugging logs...
	I0207 21:32:44.851859    1820 cli_runner.go:133] Run: docker network inspect custom-weave-20220207210133-8704
	W0207 21:32:46.285318    1820 cli_runner.go:180] docker network inspect custom-weave-20220207210133-8704 returned with exit code 1
	I0207 21:32:46.285579    1820 cli_runner.go:186] Completed: docker network inspect custom-weave-20220207210133-8704: (1.4334511s)
	I0207 21:32:46.285645    1820 network_create.go:257] error running [docker network inspect custom-weave-20220207210133-8704]: docker network inspect custom-weave-20220207210133-8704: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: custom-weave-20220207210133-8704
	I0207 21:32:46.285738    1820 network_create.go:259] output of [docker network inspect custom-weave-20220207210133-8704]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: custom-weave-20220207210133-8704
	
	** /stderr **
	W0207 21:32:46.287222    1820 delete.go:139] delete failed (probably ok) <nil>
	I0207 21:32:46.287284    1820 fix.go:120] Sleeping 1 second for extra luck!
	I0207 21:32:47.288745    1820 start.go:126] createHost starting for "" (driver="docker")
	I0207 21:32:47.295439    1820 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0207 21:32:47.295439    1820 start.go:160] libmachine.API.Create for "custom-weave-20220207210133-8704" (driver="docker")
	I0207 21:32:47.295439    1820 client.go:168] LocalClient.Create starting
	I0207 21:32:47.296082    1820 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0207 21:32:47.296622    1820 main.go:130] libmachine: Decoding PEM data...
	I0207 21:32:47.296699    1820 main.go:130] libmachine: Parsing certificate...
	I0207 21:32:47.297002    1820 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0207 21:32:47.297212    1820 main.go:130] libmachine: Decoding PEM data...
	I0207 21:32:47.297212    1820 main.go:130] libmachine: Parsing certificate...
	I0207 21:32:47.304400    1820 cli_runner.go:133] Run: docker network inspect custom-weave-20220207210133-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0207 21:32:48.635613    1820 cli_runner.go:180] docker network inspect custom-weave-20220207210133-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0207 21:32:48.635613    1820 cli_runner.go:186] Completed: docker network inspect custom-weave-20220207210133-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.3312062s)
	I0207 21:32:48.646596    1820 network_create.go:254] running [docker network inspect custom-weave-20220207210133-8704] to gather additional debugging logs...
	I0207 21:32:48.646596    1820 cli_runner.go:133] Run: docker network inspect custom-weave-20220207210133-8704
	W0207 21:32:50.268549    1820 cli_runner.go:180] docker network inspect custom-weave-20220207210133-8704 returned with exit code 1
	I0207 21:32:50.268549    1820 cli_runner.go:186] Completed: docker network inspect custom-weave-20220207210133-8704: (1.6219443s)
	I0207 21:32:50.268549    1820 network_create.go:257] error running [docker network inspect custom-weave-20220207210133-8704]: docker network inspect custom-weave-20220207210133-8704: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: custom-weave-20220207210133-8704
	I0207 21:32:50.268549    1820 network_create.go:259] output of [docker network inspect custom-weave-20220207210133-8704]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: custom-weave-20220207210133-8704
	
	** /stderr **
	I0207 21:32:50.277497    1820 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0207 21:32:51.930342    1820 cli_runner.go:186] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.6528368s)
	I0207 21:32:51.961315    1820 network.go:284] reusing subnet 192.168.49.0 that has expired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005222b0] amended:true}} dirty:map[192.168.49.0:0xc0005222b0 192.168.58.0:0xc000d16228 192.168.67.0:0xc000d162d0] misses:1}
	I0207 21:32:51.962336    1820 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:32:51.962336    1820 network_create.go:106] attempt to create docker network custom-weave-20220207210133-8704 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0207 21:32:51.969333    1820 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220207210133-8704
	W0207 21:32:53.493125    1820 cli_runner.go:180] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220207210133-8704 returned with exit code 1
	I0207 21:32:53.493125    1820 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220207210133-8704: (1.5237846s)
	W0207 21:32:53.493125    1820 network_create.go:98] failed to create docker network custom-weave-20220207210133-8704 192.168.49.0/24, will retry: subnet is taken
	I0207 21:32:53.510116    1820 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005222b0] amended:true}} dirty:map[192.168.49.0:0xc0005222b0 192.168.58.0:0xc000d16228 192.168.67.0:0xc000d162d0] misses:1}
	I0207 21:32:53.510116    1820 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:32:53.532299    1820 network.go:284] reusing subnet 192.168.58.0 that has expired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005222b0] amended:true}} dirty:map[192.168.49.0:0xc0005222b0 192.168.58.0:0xc000d16228 192.168.67.0:0xc000d162d0] misses:2}
	I0207 21:32:53.532299    1820 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:32:53.532474    1820 network_create.go:106] attempt to create docker network custom-weave-20220207210133-8704 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0207 21:32:53.541196    1820 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220207210133-8704
	W0207 21:32:54.961961    1820 cli_runner.go:180] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220207210133-8704 returned with exit code 1
	I0207 21:32:54.961961    1820 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220207210133-8704: (1.4207573s)
	W0207 21:32:54.961961    1820 network_create.go:98] failed to create docker network custom-weave-20220207210133-8704 192.168.58.0/24, will retry: subnet is taken
	I0207 21:32:54.981067    1820 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005222b0 192.168.58.0:0xc000d16228 192.168.67.0:0xc000d162d0] amended:false}} dirty:map[] misses:0}
	I0207 21:32:54.981067    1820 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:32:55.000116    1820 network.go:284] reusing subnet 192.168.67.0 that has expired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005222b0 192.168.58.0:0xc000d16228 192.168.67.0:0xc000d162d0] amended:false}} dirty:map[] misses:0}
	I0207 21:32:55.000116    1820 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:32:55.000116    1820 network_create.go:106] attempt to create docker network custom-weave-20220207210133-8704 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0207 21:32:55.005104    1820 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220207210133-8704
	I0207 21:32:57.503733    1820 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220207210133-8704: (2.4984863s)
	I0207 21:32:57.503733    1820 network_create.go:90] docker network custom-weave-20220207210133-8704 192.168.67.0/24 created
	I0207 21:32:57.503839    1820 kic.go:106] calculated static IP "192.168.67.2" for the "custom-weave-20220207210133-8704" container
	I0207 21:32:57.519425    1820 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0207 21:32:58.906037    1820 cli_runner.go:186] Completed: docker ps -a --format {{.Names}}: (1.386521s)
	I0207 21:32:58.913830    1820 cli_runner.go:133] Run: docker volume create custom-weave-20220207210133-8704 --label name.minikube.sigs.k8s.io=custom-weave-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true
	I0207 21:33:00.296384    1820 cli_runner.go:186] Completed: docker volume create custom-weave-20220207210133-8704 --label name.minikube.sigs.k8s.io=custom-weave-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true: (1.3825464s)
	I0207 21:33:00.296540    1820 oci.go:102] Successfully created a docker volume custom-weave-20220207210133-8704
	I0207 21:33:00.302631    1820 cli_runner.go:133] Run: docker run --rm --name custom-weave-20220207210133-8704-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220207210133-8704 --entrypoint /usr/bin/test -v custom-weave-20220207210133-8704:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -d /var/lib
	I0207 21:33:07.806222    1820 cli_runner.go:186] Completed: docker run --rm --name custom-weave-20220207210133-8704-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220207210133-8704 --entrypoint /usr/bin/test -v custom-weave-20220207210133-8704:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -d /var/lib: (7.5035523s)
	I0207 21:33:07.806222    1820 oci.go:106] Successfully prepared a docker volume custom-weave-20220207210133-8704
	I0207 21:33:07.806222    1820 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 21:33:07.806222    1820 kic.go:179] Starting extracting preloaded images to volume ...
	I0207 21:33:07.811222    1820 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20220207210133-8704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0207 21:33:57.053621    1820 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20220207210133-8704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir: (49.2421448s)
	I0207 21:33:57.053621    1820 kic.go:188] duration metric: took 49.247144 seconds to extract preloaded images to volume
	I0207 21:33:57.058303    1820 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 21:33:59.487956    1820 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.4296405s)
	I0207 21:33:59.488368    1820 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:77 OomKillDisable:true NGoroutines:70 SystemTime:2022-02-07 21:33:58.3685231 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_
bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 21:33:59.493933    1820 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0207 21:34:01.833912    1820 cli_runner.go:186] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.3399673s)
	I0207 21:34:01.839925    1820 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20220207210133-8704 --name custom-weave-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20220207210133-8704 --network custom-weave-20220207210133-8704 --ip 192.168.67.2 --volume custom-weave-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8
	W0207 21:34:04.239784    1820 cli_runner.go:180] docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20220207210133-8704 --name custom-weave-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20220207210133-8704 --network custom-weave-20220207210133-8704 --ip 192.168.67.2 --volume custom-weave-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 returned with exit code 125
	I0207 21:34:04.239784    1820 cli_runner.go:186] Completed: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20220207210133-8704 --name custom-weave-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20220207210133-8704 --network custom-weave-20220207210133-8704 --ip 192.168.67.2 --volume custom-weave-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: (2.3998472s)
	I0207 21:34:04.239784    1820 client.go:171] LocalClient.Create took 1m16.9439471s
	I0207 21:34:06.250875    1820 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0207 21:34:06.256887    1820 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220207210133-8704
	W0207 21:34:07.707395    1820 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220207210133-8704 returned with exit code 1
	I0207 21:34:07.707395    1820 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220207210133-8704: (1.4505009s)
	I0207 21:34:07.707395    1820 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 21:34:08.004167    1820 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220207210133-8704
	W0207 21:34:09.432914    1820 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220207210133-8704 returned with exit code 1
	I0207 21:34:09.432914    1820 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220207210133-8704: (1.4287393s)
	W0207 21:34:09.432914    1820 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0207 21:34:09.432914    1820 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 21:34:09.432914    1820 start.go:129] duration metric: createHost completed in 1m22.1436779s
	I0207 21:34:09.440906    1820 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0207 21:34:09.446913    1820 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220207210133-8704
	W0207 21:34:10.720002    1820 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220207210133-8704 returned with exit code 1
	I0207 21:34:10.720002    1820 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220207210133-8704: (1.2730821s)
	I0207 21:34:10.720002    1820 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 21:34:10.957981    1820 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220207210133-8704
	W0207 21:34:12.344488    1820 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220207210133-8704 returned with exit code 1
	I0207 21:34:12.344793    1820 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220207210133-8704: (1.3865004s)
	W0207 21:34:12.344793    1820 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0207 21:34:12.344793    1820 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 21:34:12.344793    1820 fix.go:57] fixHost completed within 6m19.4942561s
	I0207 21:34:12.344793    1820 start.go:80] releasing machines lock for "custom-weave-20220207210133-8704", held for 6m19.4942561s
	W0207 21:34:12.345539    1820 out.go:241] * Failed to start docker container. Running "minikube delete -p custom-weave-20220207210133-8704" may fix it: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20220207210133-8704 --name custom-weave-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20220207210133-8704 --network custom-weave-20220207210133-8704 --ip 192.168.67.2 --volume custom-weave-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::
32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	840d21c43801ee9129c9974a82a8b1ce37a817b713ba9548a9dc80f513497352
	
	stderr:
	docker: Error response from daemon: network custom-weave-20220207210133-8704 not found.
	
	* Failed to start docker container. Running "minikube delete -p custom-weave-20220207210133-8704" may fix it: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20220207210133-8704 --name custom-weave-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20220207210133-8704 --network custom-weave-20220207210133-8704 --ip 192.168.67.2 --volume custom-weave-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v
0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	840d21c43801ee9129c9974a82a8b1ce37a817b713ba9548a9dc80f513497352
	
	stderr:
	docker: Error response from daemon: network custom-weave-20220207210133-8704 not found.
	
	I0207 21:34:12.353406    1820 out.go:176] 
	W0207 21:34:12.353406    1820 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20220207210133-8704 --name custom-weave-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20220207210133-8704 --network custom-weave-20220207210133-8704 --ip 192.168.67.2 --volume custom-weave-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-16438
23806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	840d21c43801ee9129c9974a82a8b1ce37a817b713ba9548a9dc80f513497352
	
	stderr:
	docker: Error response from daemon: network custom-weave-20220207210133-8704 not found.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20220207210133-8704 --name custom-weave-20220207210133-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220207210133-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20220207210133-8704 --network custom-weave-20220207210133-8704 --ip 192.168.67.2 --volume custom-weave-20220207210133-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f
658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	840d21c43801ee9129c9974a82a8b1ce37a817b713ba9548a9dc80f513497352
	
	stderr:
	docker: Error response from daemon: network custom-weave-20220207210133-8704 not found.
	
	W0207 21:34:12.353406    1820 out.go:241] * 
	* 
	W0207 21:34:12.354883    1820 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0207 21:34:12.357962    1820 out.go:176] 

                                                
                                                
** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-weave/Start (489.21s)
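
[Editor's aside] The log above shows minikube probing subnets in the order 192.168.49.0/24 → 192.168.58.0/24 → 192.168.67.0/24 (skipping ones with unexpired reservations), then logging derived fields Gateway=.1, ClientMin=.2, ClientMax=.254, Broadcast=.255 for the winner. The sketch below reproduces that arithmetic for illustration only; it is not minikube's actual network.go code, and the helper names are hypothetical:

```python
import ipaddress

def candidate_subnets(start_octet=49, step=9, count=3):
    # Hypothetical helper mirroring the 49 -> 58 -> 67 probe order seen in the log.
    return [ipaddress.ip_network(f"192.168.{start_octet + i * step}.0/24")
            for i in range(count)]

def derive_addresses(cidr):
    # Recompute the Gateway/ClientMin/ClientMax/Broadcast fields logged by network.go.
    net = ipaddress.ip_network(cidr)
    return {
        "Gateway": str(net.network_address + 1),
        "ClientMin": str(net.network_address + 2),
        "ClientMax": str(net.broadcast_address - 1),
        "Broadcast": str(net.broadcast_address),
    }

print([str(n) for n in candidate_subnets()])
print(derive_addresses("192.168.67.0/24"))
```

Note that in this run all three subnets were eventually tried because the first two `docker network create` calls failed with "subnet is taken"; the create on 192.168.67.0/24 succeeded, yet the network was gone again by the time `docker run` referenced it, which is the actual failure.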

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (466.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-20220207210111-8704 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p enable-default-cni-20220207210111-8704 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker: exit status 80 (7m46.560718s)

                                                
                                                
-- stdout --
	* [enable-default-cni-20220207210111-8704] minikube v1.25.1 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=13439
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Starting control plane node enable-default-cni-20220207210111-8704 in cluster enable-default-cni-20220207210111-8704
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "enable-default-cni-20220207210111-8704" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0207 21:26:30.397490    9312 out.go:297] Setting OutFile to fd 2020 ...
	I0207 21:26:30.455602    9312 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 21:26:30.455602    9312 out.go:310] Setting ErrFile to fd 2008...
	I0207 21:26:30.455774    9312 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 21:26:30.473460    9312 out.go:304] Setting JSON to false
	I0207 21:26:30.476712    9312 start.go:112] hostinfo: {"hostname":"minikube3","uptime":436809,"bootTime":1643832381,"procs":158,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0207 21:26:30.476712    9312 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0207 21:26:30.975029    9312 out.go:176] * [enable-default-cni-20220207210111-8704] minikube v1.25.1 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I0207 21:26:30.975827    9312 notify.go:174] Checking for updates...
	I0207 21:26:31.274622    9312 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0207 21:26:31.385385    9312 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0207 21:26:31.389494    9312 out.go:176]   - MINIKUBE_LOCATION=13439
	I0207 21:26:31.392786    9312 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0207 21:26:31.394214    9312 config.go:176] Loaded profile config "cilium-20220207210133-8704": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 21:26:31.394214    9312 config.go:176] Loaded profile config "custom-weave-20220207210133-8704": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 21:26:31.395086    9312 config.go:176] Loaded profile config "false-20220207210133-8704": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 21:26:31.395086    9312 driver.go:344] Setting default libvirt URI to qemu:///system
	I0207 21:26:34.050767    9312 docker.go:132] docker version: linux-20.10.12
	I0207 21:26:34.057552    9312 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 21:26:36.393461    9312 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.3358964s)
	I0207 21:26:36.393461    9312 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:60 OomKillDisable:true NGoroutines:57 SystemTime:2022-02-07 21:26:35.1753084 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_
bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 21:26:36.399443    9312 out.go:176] * Using the docker driver based on user configuration
	I0207 21:26:36.399443    9312 start.go:281] selected driver: docker
	I0207 21:26:36.399443    9312 start.go:798] validating driver "docker" against <nil>
	I0207 21:26:36.399443    9312 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0207 21:26:36.466437    9312 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 21:26:38.865734    9312 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.3992844s)
	I0207 21:26:38.865734    9312 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:59 OomKillDisable:true NGoroutines:55 SystemTime:2022-02-07 21:26:37.7596897 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_
bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 21:26:38.865734    9312 start_flags.go:288] no existing cluster config was found, will generate one from the flags 
	I0207 21:26:38.866993    9312 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	E0207 21:26:38.866993    9312 start_flags.go:440] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0207 21:26:38.867212    9312 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0207 21:26:38.867298    9312 cni.go:93] Creating CNI manager for "bridge"
	I0207 21:26:38.867361    9312 start_flags.go:297] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0207 21:26:38.867361    9312 start_flags.go:302] config:
	{Name:enable-default-cni-20220207210111-8704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:enable-default-cni-20220207210111-8704 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clust
er.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 21:26:38.875835    9312 out.go:176] * Starting control plane node enable-default-cni-20220207210111-8704 in cluster enable-default-cni-20220207210111-8704
	I0207 21:26:38.875835    9312 cache.go:120] Beginning downloading kic base image for docker with docker
	I0207 21:26:38.879551    9312 out.go:176] * Pulling base image ...
	I0207 21:26:38.879551    9312 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 21:26:38.879551    9312 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon
	I0207 21:26:38.880493    9312 preload.go:148] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4
	I0207 21:26:38.880569    9312 cache.go:57] Caching tarball of preloaded images
	I0207 21:26:38.880825    9312 preload.go:174] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0207 21:26:38.881246    9312 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.3 on docker
	I0207 21:26:38.881432    9312 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\enable-default-cni-20220207210111-8704\config.json ...
	I0207 21:26:38.881515    9312 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\enable-default-cni-20220207210111-8704\config.json: {Name:mka21c5a9039ee7b4eed00474ca7d6a8dc0cdae8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 21:26:40.324742    9312 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon, skipping pull
	I0207 21:26:40.324907    9312 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 exists in daemon, skipping load
	I0207 21:26:40.324907    9312 cache.go:208] Successfully downloaded all kic artifacts
	I0207 21:26:40.324971    9312 start.go:313] acquiring machines lock for enable-default-cni-20220207210111-8704: {Name:mka4390d5a4f387ca7899d1e09b326a8e75a7707 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0207 21:26:40.324971    9312 start.go:317] acquired machines lock for "enable-default-cni-20220207210111-8704" in 0s
	I0207 21:26:40.324971    9312 start.go:89] Provisioning new machine with config: &{Name:enable-default-cni-20220207210111-8704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:enable-default-cni-20220207210111-8704 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0207 21:26:40.325694    9312 start.go:126] createHost starting for "" (driver="docker")
	I0207 21:26:40.329610    9312 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0207 21:26:40.330132    9312 start.go:160] libmachine.API.Create for "enable-default-cni-20220207210111-8704" (driver="docker")
	I0207 21:26:40.330200    9312 client.go:168] LocalClient.Create starting
	I0207 21:26:40.330552    9312 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0207 21:26:40.330552    9312 main.go:130] libmachine: Decoding PEM data...
	I0207 21:26:40.330552    9312 main.go:130] libmachine: Parsing certificate...
	I0207 21:26:40.331270    9312 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0207 21:26:40.331637    9312 main.go:130] libmachine: Decoding PEM data...
	I0207 21:26:40.331694    9312 main.go:130] libmachine: Parsing certificate...
	I0207 21:26:40.339518    9312 cli_runner.go:133] Run: docker network inspect enable-default-cni-20220207210111-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0207 21:26:41.685388    9312 cli_runner.go:180] docker network inspect enable-default-cni-20220207210111-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0207 21:26:41.685388    9312 cli_runner.go:186] Completed: docker network inspect enable-default-cni-20220207210111-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.3458632s)
	I0207 21:26:41.692080    9312 network_create.go:254] running [docker network inspect enable-default-cni-20220207210111-8704] to gather additional debugging logs...
	I0207 21:26:41.692135    9312 cli_runner.go:133] Run: docker network inspect enable-default-cni-20220207210111-8704
	W0207 21:26:42.999440    9312 cli_runner.go:180] docker network inspect enable-default-cni-20220207210111-8704 returned with exit code 1
	I0207 21:26:42.999440    9312 cli_runner.go:186] Completed: docker network inspect enable-default-cni-20220207210111-8704: (1.3072973s)
	I0207 21:26:42.999440    9312 network_create.go:257] error running [docker network inspect enable-default-cni-20220207210111-8704]: docker network inspect enable-default-cni-20220207210111-8704: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: enable-default-cni-20220207210111-8704
	I0207 21:26:42.999440    9312 network_create.go:259] output of [docker network inspect enable-default-cni-20220207210111-8704]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: enable-default-cni-20220207210111-8704
	
	** /stderr **
	I0207 21:26:43.004433    9312 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0207 21:26:44.368140    9312 cli_runner.go:186] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.3636999s)
	I0207 21:26:44.393149    9312 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00014e510] misses:0}
	I0207 21:26:44.393149    9312 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:26:44.393149    9312 network_create.go:106] attempt to create docker network enable-default-cni-20220207210111-8704 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0207 21:26:44.399150    9312 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220207210111-8704
	I0207 21:26:47.501168    9312 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220207210111-8704: (3.1020015s)
	I0207 21:26:47.501168    9312 network_create.go:90] docker network enable-default-cni-20220207210111-8704 192.168.49.0/24 created
	I0207 21:26:47.501168    9312 kic.go:106] calculated static IP "192.168.49.2" for the "enable-default-cni-20220207210111-8704" container
	I0207 21:26:47.514163    9312 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0207 21:26:48.789741    9312 cli_runner.go:186] Completed: docker ps -a --format {{.Names}}: (1.2755091s)
	I0207 21:26:48.797564    9312 cli_runner.go:133] Run: docker volume create enable-default-cni-20220207210111-8704 --label name.minikube.sigs.k8s.io=enable-default-cni-20220207210111-8704 --label created_by.minikube.sigs.k8s.io=true
	I0207 21:26:50.168109    9312 cli_runner.go:186] Completed: docker volume create enable-default-cni-20220207210111-8704 --label name.minikube.sigs.k8s.io=enable-default-cni-20220207210111-8704 --label created_by.minikube.sigs.k8s.io=true: (1.3705371s)
	I0207 21:26:50.168109    9312 oci.go:102] Successfully created a docker volume enable-default-cni-20220207210111-8704
	I0207 21:26:50.176087    9312 cli_runner.go:133] Run: docker run --rm --name enable-default-cni-20220207210111-8704-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20220207210111-8704 --entrypoint /usr/bin/test -v enable-default-cni-20220207210111-8704:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -d /var/lib
	I0207 21:27:02.993419    9312 cli_runner.go:186] Completed: docker run --rm --name enable-default-cni-20220207210111-8704-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20220207210111-8704 --entrypoint /usr/bin/test -v enable-default-cni-20220207210111-8704:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -d /var/lib: (12.8172636s)
	I0207 21:27:02.993419    9312 oci.go:106] Successfully prepared a docker volume enable-default-cni-20220207210111-8704
	I0207 21:27:02.993419    9312 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 21:27:02.993419    9312 kic.go:179] Starting extracting preloaded images to volume ...
	I0207 21:27:03.000439    9312 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-20220207210111-8704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0207 21:27:44.769803    9312 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-20220207210111-8704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir: (41.7691444s)
	I0207 21:27:44.769803    9312 kic.go:188] duration metric: took 41.776164 seconds to extract preloaded images to volume
	I0207 21:27:44.774803    9312 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 21:27:47.266282    9312 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.4914653s)
	I0207 21:27:47.266282    9312 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:58 OomKillDisable:true NGoroutines:52 SystemTime:2022-02-07 21:27:46.0917055 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_
bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 21:27:47.272292    9312 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0207 21:27:49.593814    9312 cli_runner.go:186] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.3211088s)
	I0207 21:27:49.602887    9312 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-20220207210111-8704 --name enable-default-cni-20220207210111-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20220207210111-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-20220207210111-8704 --network enable-default-cni-20220207210111-8704 --ip 192.168.49.2 --volume enable-default-cni-20220207210111-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8
	W0207 21:27:52.081294    9312 cli_runner.go:180] docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-20220207210111-8704 --name enable-default-cni-20220207210111-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20220207210111-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-20220207210111-8704 --network enable-default-cni-20220207210111-8704 --ip 192.168.49.2 --volume enable-default-cni-20220207210111-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 returned with exit code 125
	I0207 21:27:52.081482    9312 cli_runner.go:186] Completed: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-20220207210111-8704 --name enable-default-cni-20220207210111-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20220207210111-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-20220207210111-8704 --network enable-default-cni-20220207210111-8704 --ip 192.168.49.2 --volume enable-default-cni-20220207210111-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: (2.478394s)
	I0207 21:27:52.081554    9312 client.go:171] LocalClient.Create took 1m11.7509054s
	I0207 21:27:54.089052    9312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0207 21:27:54.094067    9312 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220207210111-8704
	W0207 21:27:55.399306    9312 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220207210111-8704 returned with exit code 1
	I0207 21:27:55.399306    9312 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220207210111-8704: (1.3052319s)
	I0207 21:27:55.399306    9312 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 21:27:55.682231    9312 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220207210111-8704
	W0207 21:27:56.951657    9312 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220207210111-8704 returned with exit code 1
	I0207 21:27:56.951657    9312 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220207210111-8704: (1.2694189s)
	W0207 21:27:56.951657    9312 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0207 21:27:56.951657    9312 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 21:27:56.951657    9312 start.go:129] duration metric: createHost completed in 1m16.6255583s
	I0207 21:27:56.951657    9312 start.go:80] releasing machines lock for "enable-default-cni-20220207210111-8704", held for 1m16.6262815s
	W0207 21:27:56.951657    9312 start.go:570] error starting host: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-20220207210111-8704 --name enable-default-cni-20220207210111-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20220207210111-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-20220207210111-8704 --network enable-default-cni-20220207210111-8704 --ip 192.168.49.2 --volume enable-default-cni-20220207210111-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	151ee41257b6335848fed61d68a8e0b75d9cec66fb778a637b908f2713d1f575
	
	stderr:
	docker: Error response from daemon: network enable-default-cni-20220207210111-8704 not found.
	I0207 21:27:56.965669    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:27:58.356697    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.39102s)
	W0207 21:27:58.356697    9312 start.go:575] delete host: Docker machine "enable-default-cni-20220207210111-8704" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	W0207 21:27:58.356697    9312 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-20220207210111-8704 --name enable-default-cni-20220207210111-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20220207210111-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-20220207210111-8704 --network enable-default-cni-20220207210111-8704 --ip 192.168.49.2 --volume enable-default-cni-20220207210111-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	151ee41257b6335848fed61d68a8e0b75d9cec66fb778a637b908f2713d1f575
	
	stderr:
	docker: Error response from daemon: network enable-default-cni-20220207210111-8704 not found.
	
	! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-20220207210111-8704 --name enable-default-cni-20220207210111-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20220207210111-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-20220207210111-8704 --network enable-default-cni-20220207210111-8704 --ip 192.168.49.2 --volume enable-default-cni-20220207210111-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	151ee41257b6335848fed61d68a8e0b75d9cec66fb778a637b908f2713d1f575
	
	stderr:
	docker: Error response from daemon: network enable-default-cni-20220207210111-8704 not found.
	
	I0207 21:27:58.356697    9312 start.go:585] Will try again in 5 seconds ...
	I0207 21:28:03.358493    9312 start.go:313] acquiring machines lock for enable-default-cni-20220207210111-8704: {Name:mka4390d5a4f387ca7899d1e09b326a8e75a7707 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0207 21:28:03.358962    9312 start.go:317] acquired machines lock for "enable-default-cni-20220207210111-8704" in 358.4µs
	I0207 21:28:03.359239    9312 start.go:93] Skipping create...Using existing machine configuration
	I0207 21:28:03.359239    9312 fix.go:55] fixHost starting: 
	I0207 21:28:03.372928    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:28:04.981869    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.6089328s)
	I0207 21:28:04.981869    9312 fix.go:108] recreateIfNeeded on enable-default-cni-20220207210111-8704: state= err=<nil>
	I0207 21:28:04.981869    9312 fix.go:113] machineExists: false. err=machine does not exist
	I0207 21:28:04.984852    9312 out.go:176] * docker "enable-default-cni-20220207210111-8704" container is missing, will recreate.
	I0207 21:28:04.984852    9312 delete.go:124] DEMOLISHING enable-default-cni-20220207210111-8704 ...
	I0207 21:28:04.999843    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:28:06.623428    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.6235766s)
	I0207 21:28:06.623428    9312 stop.go:79] host is in state 
	I0207 21:28:06.623428    9312 main.go:130] libmachine: Stopping "enable-default-cni-20220207210111-8704"...
	I0207 21:28:06.641434    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:28:08.204992    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.5634108s)
	I0207 21:28:08.223955    9312 kic_runner.go:93] Run: systemctl --version
	I0207 21:28:08.223955    9312 kic_runner.go:114] Args: [docker exec --privileged enable-default-cni-20220207210111-8704 systemctl --version]
	I0207 21:28:10.791671    9312 kic_runner.go:93] Run: sudo service kubelet stop
	I0207 21:28:10.791671    9312 kic_runner.go:114] Args: [docker exec --privileged enable-default-cni-20220207210111-8704 sudo service kubelet stop]
	I0207 21:28:13.276707    9312 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container 151ee41257b6335848fed61d68a8e0b75d9cec66fb778a637b908f2713d1f575 is not running
	
	** /stderr **
	W0207 21:28:13.276788    9312 kic.go:443] couldn't stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 151ee41257b6335848fed61d68a8e0b75d9cec66fb778a637b908f2713d1f575 is not running
	I0207 21:28:13.290622    9312 kic_runner.go:93] Run: sudo service kubelet stop
	I0207 21:28:13.291192    9312 kic_runner.go:114] Args: [docker exec --privileged enable-default-cni-20220207210111-8704 sudo service kubelet stop]
	I0207 21:28:14.712119    9312 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container 151ee41257b6335848fed61d68a8e0b75d9cec66fb778a637b908f2713d1f575 is not running
	
	** /stderr **
	W0207 21:28:14.712119    9312 kic.go:445] couldn't force stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 151ee41257b6335848fed61d68a8e0b75d9cec66fb778a637b908f2713d1f575 is not running
	I0207 21:28:14.724114    9312 kic_runner.go:93] Run: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
	I0207 21:28:14.724114    9312 kic_runner.go:114] Args: [docker exec --privileged enable-default-cni-20220207210111-8704 docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}]
	I0207 21:28:16.201876    9312 kic.go:456] unable list containers : docker: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 151ee41257b6335848fed61d68a8e0b75d9cec66fb778a637b908f2713d1f575 is not running
	I0207 21:28:16.201876    9312 kic.go:466] successfully stopped kubernetes!
	I0207 21:28:16.212872    9312 kic_runner.go:93] Run: pgrep kube-apiserver
	I0207 21:28:16.212872    9312 kic_runner.go:114] Args: [docker exec --privileged enable-default-cni-20220207210111-8704 pgrep kube-apiserver]
	I0207 21:28:19.137775    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:28:20.447423    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.3095228s)
	I0207 21:28:23.464084    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:28:24.805030    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.3409392s)
	I0207 21:28:27.816194    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:28:29.129396    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.313195s)
	I0207 21:28:32.141683    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:28:33.461575    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.3198842s)
	I0207 21:28:36.473161    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:28:37.751421    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.2782533s)
	I0207 21:28:40.762705    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:28:42.041066    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.2783538s)
	I0207 21:28:45.066673    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:28:46.795433    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.7287507s)
	I0207 21:28:49.813078    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:28:51.304533    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.4914475s)
	I0207 21:28:54.315985    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:28:55.716158    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.3998931s)
	I0207 21:28:58.730208    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:29:00.064226    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.3340104s)
	I0207 21:29:03.077990    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:29:04.448085    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.3699564s)
	I0207 21:29:07.461922    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:29:08.796516    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.3345864s)
	I0207 21:29:11.808177    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:29:13.216574    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.4083898s)
	I0207 21:29:16.233125    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:29:17.713694    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.4803859s)
	I0207 21:29:20.731397    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:29:22.048118    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.3166529s)
	I0207 21:29:25.062224    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:29:26.423929    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.3615386s)
	I0207 21:29:29.437483    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:29:30.798072    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.3605816s)
	I0207 21:29:33.808209    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:29:35.143372    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.3351557s)
	I0207 21:29:38.159040    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:29:39.439296    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.2802498s)
	I0207 21:29:42.457994    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:29:43.843565    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.3852887s)
	I0207 21:29:46.856104    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:29:48.164754    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.3085065s)
	I0207 21:29:51.176895    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:29:52.460901    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.2838321s)
	I0207 21:29:55.475945    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:29:56.844104    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.3681522s)
	I0207 21:29:59.861625    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:30:01.274791    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.413158s)
	I0207 21:30:04.287291    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:30:05.596407    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.3091092s)
	I0207 21:30:08.612045    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:30:10.114684    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.5026307s)
	I0207 21:30:13.139501    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:30:14.642535    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.5030257s)
	I0207 21:30:17.666238    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:30:19.143560    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.4773144s)
	I0207 21:30:22.159286    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:30:23.555752    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.3964581s)
	I0207 21:30:26.578699    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:30:27.907245    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.3283872s)
	I0207 21:30:30.918950    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:30:32.251964    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.3330064s)
	I0207 21:30:35.264869    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:30:37.207576    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.3370187s)
	I0207 21:30:40.219860    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:30:41.481446    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.2615786s)
	I0207 21:30:44.494270    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:30:45.782043    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.2876109s)
	I0207 21:30:48.792875    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:30:50.106714    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.3138314s)
	I0207 21:30:53.120477    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:30:54.528162    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.4072432s)
	I0207 21:30:57.544666    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:30:58.914231    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.3694371s)
	I0207 21:31:01.940958    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:31:03.354186    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.4131079s)
	I0207 21:31:06.382062    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:31:07.739534    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.3574648s)
	I0207 21:31:10.750429    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:31:12.184725    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.4342887s)
	I0207 21:31:15.203201    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:31:16.834817    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.6316078s)
	I0207 21:31:19.849304    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:31:21.245049    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.3957373s)
	I0207 21:31:24.263536    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:31:25.627709    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.3641662s)
	I0207 21:31:28.639049    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:31:29.893266    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.254101s)
	I0207 21:31:32.913989    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:31:34.242284    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.3282883s)
	I0207 21:31:37.252595    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:31:38.574689    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.3220059s)
	I0207 21:31:41.585889    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:31:42.907122    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.321226s)
	I0207 21:31:45.927089    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:31:47.358544    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.4314474s)
	I0207 21:31:50.369877    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:31:51.745140    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.3749741s)
	I0207 21:31:54.757577    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:31:56.059959    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.3022364s)
	I0207 21:31:59.073568    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:32:00.463901    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.3903255s)
	I0207 21:32:03.474192    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:32:04.825085    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.3508239s)
	I0207 21:32:07.844727    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:32:09.273621    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.4288867s)
	I0207 21:32:12.285549    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:32:13.685721    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.400164s)
	I0207 21:32:16.700368    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:32:17.965467    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.2650923s)
	I0207 21:32:20.980048    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:32:22.321375    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.34132s)
	I0207 21:32:25.340386    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:32:26.811259    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.4708655s)
	I0207 21:32:29.830016    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:32:31.296098    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.4659589s)
	I0207 21:32:34.308659    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:32:35.745795    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.4371283s)
	I0207 21:32:38.756803    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:32:40.144662    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.387795s)
	I0207 21:32:43.146081    9312 stop.go:59] stop err: Maximum number of retries (60) exceeded
	I0207 21:32:43.146081    9312 delete.go:129] stophost failed (probably ok): Temporary Error: stop: Maximum number of retries (60) exceeded
	I0207 21:32:43.158105    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:32:44.560808    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.4026958s)
	W0207 21:32:44.560808    9312 delete.go:135] deletehost failed: Docker machine "enable-default-cni-20220207210111-8704" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0207 21:32:44.567109    9312 cli_runner.go:133] Run: docker container inspect -f {{.Id}} enable-default-cni-20220207210111-8704
	I0207 21:32:46.051265    9312 cli_runner.go:186] Completed: docker container inspect -f {{.Id}} enable-default-cni-20220207210111-8704: (1.4841478s)
	I0207 21:32:46.056240    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:32:47.427390    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.3711421s)
	I0207 21:32:47.432390    9312 cli_runner.go:133] Run: docker exec --privileged -t enable-default-cni-20220207210111-8704 /bin/bash -c "sudo init 0"
	W0207 21:32:48.908996    9312 cli_runner.go:180] docker exec --privileged -t enable-default-cni-20220207210111-8704 /bin/bash -c "sudo init 0" returned with exit code 1
	I0207 21:32:48.908996    9312 cli_runner.go:186] Completed: docker exec --privileged -t enable-default-cni-20220207210111-8704 /bin/bash -c "sudo init 0": (1.4755737s)
	I0207 21:32:48.908996    9312 oci.go:659] error shutdown enable-default-cni-20220207210111-8704: docker exec --privileged -t enable-default-cni-20220207210111-8704 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 151ee41257b6335848fed61d68a8e0b75d9cec66fb778a637b908f2713d1f575 is not running
	I0207 21:32:49.921917    9312 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}
	I0207 21:32:51.668063    9312 cli_runner.go:186] Completed: docker container inspect enable-default-cni-20220207210111-8704 --format={{.State.Status}}: (1.7461369s)
	I0207 21:32:51.668063    9312 oci.go:673] temporary error: container enable-default-cni-20220207210111-8704 status is  but expect it to be exited
	I0207 21:32:51.668063    9312 oci.go:679] Successfully shutdown container enable-default-cni-20220207210111-8704
	I0207 21:32:51.678058    9312 cli_runner.go:133] Run: docker rm -f -v enable-default-cni-20220207210111-8704
	I0207 21:32:53.351445    9312 cli_runner.go:186] Completed: docker rm -f -v enable-default-cni-20220207210111-8704: (1.6733785s)
	I0207 21:32:53.359479    9312 cli_runner.go:133] Run: docker container inspect -f {{.Id}} enable-default-cni-20220207210111-8704
	W0207 21:32:54.793772    9312 cli_runner.go:180] docker container inspect -f {{.Id}} enable-default-cni-20220207210111-8704 returned with exit code 1
	I0207 21:32:54.793772    9312 cli_runner.go:186] Completed: docker container inspect -f {{.Id}} enable-default-cni-20220207210111-8704: (1.4341372s)
	I0207 21:32:54.801140    9312 cli_runner.go:133] Run: docker network inspect enable-default-cni-20220207210111-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0207 21:32:56.251116    9312 cli_runner.go:180] docker network inspect enable-default-cni-20220207210111-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0207 21:32:56.251116    9312 cli_runner.go:186] Completed: docker network inspect enable-default-cni-20220207210111-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.4494332s)
	I0207 21:32:56.256126    9312 network_create.go:254] running [docker network inspect enable-default-cni-20220207210111-8704] to gather additional debugging logs...
	I0207 21:32:56.256126    9312 cli_runner.go:133] Run: docker network inspect enable-default-cni-20220207210111-8704
	W0207 21:32:57.672669    9312 cli_runner.go:180] docker network inspect enable-default-cni-20220207210111-8704 returned with exit code 1
	I0207 21:32:57.672669    9312 cli_runner.go:186] Completed: docker network inspect enable-default-cni-20220207210111-8704: (1.4165349s)
	I0207 21:32:57.672669    9312 network_create.go:257] error running [docker network inspect enable-default-cni-20220207210111-8704]: docker network inspect enable-default-cni-20220207210111-8704: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: enable-default-cni-20220207210111-8704
	I0207 21:32:57.672669    9312 network_create.go:259] output of [docker network inspect enable-default-cni-20220207210111-8704]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: enable-default-cni-20220207210111-8704
	
	** /stderr **
	W0207 21:32:57.673667    9312 delete.go:139] delete failed (probably ok) <nil>
	I0207 21:32:57.673667    9312 fix.go:120] Sleeping 1 second for extra luck!
	I0207 21:32:58.674219    9312 start.go:126] createHost starting for "" (driver="docker")
	I0207 21:32:58.677229    9312 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0207 21:32:58.677229    9312 start.go:160] libmachine.API.Create for "enable-default-cni-20220207210111-8704" (driver="docker")
	I0207 21:32:58.677229    9312 client.go:168] LocalClient.Create starting
	I0207 21:32:58.678219    9312 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0207 21:32:58.678219    9312 main.go:130] libmachine: Decoding PEM data...
	I0207 21:32:58.678219    9312 main.go:130] libmachine: Parsing certificate...
	I0207 21:32:58.678219    9312 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0207 21:32:58.678219    9312 main.go:130] libmachine: Decoding PEM data...
	I0207 21:32:58.678219    9312 main.go:130] libmachine: Parsing certificate...
	I0207 21:32:58.685214    9312 cli_runner.go:133] Run: docker network inspect enable-default-cni-20220207210111-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0207 21:33:00.058327    9312 cli_runner.go:180] docker network inspect enable-default-cni-20220207210111-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0207 21:33:00.058327    9312 cli_runner.go:186] Completed: docker network inspect enable-default-cni-20220207210111-8704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.3731057s)
	I0207 21:33:00.066957    9312 network_create.go:254] running [docker network inspect enable-default-cni-20220207210111-8704] to gather additional debugging logs...
	I0207 21:33:00.067045    9312 cli_runner.go:133] Run: docker network inspect enable-default-cni-20220207210111-8704
	W0207 21:33:01.419949    9312 cli_runner.go:180] docker network inspect enable-default-cni-20220207210111-8704 returned with exit code 1
	I0207 21:33:01.419949    9312 cli_runner.go:186] Completed: docker network inspect enable-default-cni-20220207210111-8704: (1.3528969s)
	I0207 21:33:01.419949    9312 network_create.go:257] error running [docker network inspect enable-default-cni-20220207210111-8704]: docker network inspect enable-default-cni-20220207210111-8704: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: enable-default-cni-20220207210111-8704
	I0207 21:33:01.419949    9312 network_create.go:259] output of [docker network inspect enable-default-cni-20220207210111-8704]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: enable-default-cni-20220207210111-8704
	
	** /stderr **
	I0207 21:33:01.428919    9312 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0207 21:33:02.723650    9312 cli_runner.go:186] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.2947238s)
	I0207 21:33:02.742749    9312 network.go:284] reusing subnet 192.168.49.0 that has expired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014e510] amended:false}} dirty:map[] misses:0}
	I0207 21:33:02.742991    9312 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:33:02.742991    9312 network_create.go:106] attempt to create docker network enable-default-cni-20220207210111-8704 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0207 21:33:02.750153    9312 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220207210111-8704
	W0207 21:33:04.006811    9312 cli_runner.go:180] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220207210111-8704 returned with exit code 1
	I0207 21:33:04.006963    9312 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220207210111-8704: (1.2565706s)
	W0207 21:33:04.007097    9312 network_create.go:98] failed to create docker network enable-default-cni-20220207210111-8704 192.168.49.0/24, will retry: subnet is taken
	I0207 21:33:04.027441    9312 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014e510] amended:false}} dirty:map[] misses:0}
	I0207 21:33:04.027441    9312 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:33:04.046663    9312 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014e510] amended:true}} dirty:map[192.168.49.0:0xc00014e510 192.168.58.0:0xc000616338] misses:0}
	I0207 21:33:04.046663    9312 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:33:04.046663    9312 network_create.go:106] attempt to create docker network enable-default-cni-20220207210111-8704 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0207 21:33:04.051250    9312 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220207210111-8704
	W0207 21:33:05.327551    9312 cli_runner.go:180] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220207210111-8704 returned with exit code 1
	I0207 21:33:05.329550    9312 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220207210111-8704: (1.2761992s)
	W0207 21:33:05.329550    9312 network_create.go:98] failed to create docker network enable-default-cni-20220207210111-8704 192.168.58.0/24, will retry: subnet is taken
	I0207 21:33:05.347067    9312 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014e510] amended:true}} dirty:map[192.168.49.0:0xc00014e510 192.168.58.0:0xc000616338] misses:1}
	I0207 21:33:05.347067    9312 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:33:05.366094    9312 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014e510] amended:true}} dirty:map[192.168.49.0:0xc00014e510 192.168.58.0:0xc000616338 192.168.67.0:0xc00014e7c0] misses:1}
	I0207 21:33:05.366094    9312 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:33:05.366094    9312 network_create.go:106] attempt to create docker network enable-default-cni-20220207210111-8704 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0207 21:33:05.370208    9312 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220207210111-8704
	W0207 21:33:06.808946    9312 cli_runner.go:180] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220207210111-8704 returned with exit code 1
	I0207 21:33:06.809019    9312 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220207210111-8704: (1.4386116s)
	W0207 21:33:06.809064    9312 network_create.go:98] failed to create docker network enable-default-cni-20220207210111-8704 192.168.67.0/24, will retry: subnet is taken
	I0207 21:33:06.828671    9312 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014e510] amended:true}} dirty:map[192.168.49.0:0xc00014e510 192.168.58.0:0xc000616338 192.168.67.0:0xc00014e7c0] misses:2}
	I0207 21:33:06.828778    9312 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:33:06.845878    9312 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014e510] amended:true}} dirty:map[192.168.49.0:0xc00014e510 192.168.58.0:0xc000616338 192.168.67.0:0xc00014e7c0 192.168.76.0:0xc0006163d8] misses:2}
	I0207 21:33:06.845878    9312 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 21:33:06.845878    9312 network_create.go:106] attempt to create docker network enable-default-cni-20220207210111-8704 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0207 21:33:06.851884    9312 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220207210111-8704
	I0207 21:33:09.443757    9312 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220207210111-8704: (2.59186s)
	I0207 21:33:09.443967    9312 network_create.go:90] docker network enable-default-cni-20220207210111-8704 192.168.76.0/24 created
	I0207 21:33:09.443967    9312 kic.go:106] calculated static IP "192.168.76.2" for the "enable-default-cni-20220207210111-8704" container
	I0207 21:33:09.461106    9312 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0207 21:33:10.845051    9312 cli_runner.go:186] Completed: docker ps -a --format {{.Names}}: (1.3839381s)
	I0207 21:33:10.852045    9312 cli_runner.go:133] Run: docker volume create enable-default-cni-20220207210111-8704 --label name.minikube.sigs.k8s.io=enable-default-cni-20220207210111-8704 --label created_by.minikube.sigs.k8s.io=true
	I0207 21:33:12.245235    9312 cli_runner.go:186] Completed: docker volume create enable-default-cni-20220207210111-8704 --label name.minikube.sigs.k8s.io=enable-default-cni-20220207210111-8704 --label created_by.minikube.sigs.k8s.io=true: (1.3931223s)
	I0207 21:33:12.245303    9312 oci.go:102] Successfully created a docker volume enable-default-cni-20220207210111-8704
	I0207 21:33:12.251040    9312 cli_runner.go:133] Run: docker run --rm --name enable-default-cni-20220207210111-8704-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20220207210111-8704 --entrypoint /usr/bin/test -v enable-default-cni-20220207210111-8704:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -d /var/lib
	I0207 21:33:17.496960    9312 cli_runner.go:186] Completed: docker run --rm --name enable-default-cni-20220207210111-8704-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20220207210111-8704 --entrypoint /usr/bin/test -v enable-default-cni-20220207210111-8704:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -d /var/lib: (5.2448943s)
	I0207 21:33:17.496960    9312 oci.go:106] Successfully prepared a docker volume enable-default-cni-20220207210111-8704
	I0207 21:33:17.496960    9312 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 21:33:17.496960    9312 kic.go:179] Starting extracting preloaded images to volume ...
	I0207 21:33:17.504951    9312 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-20220207210111-8704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0207 21:34:01.748658    9312 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-20220207210111-8704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir: (44.2434787s)
	I0207 21:34:01.748658    9312 kic.go:188] duration metric: took 44.251469 seconds to extract preloaded images to volume
	I0207 21:34:01.758669    9312 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 21:34:04.174474    9312 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.4157921s)
	I0207 21:34:04.174474    9312 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:60 OomKillDisable:true NGoroutines:52 SystemTime:2022-02-07 21:34:03.0636646 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 21:34:04.183472    9312 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0207 21:34:06.474190    9312 cli_runner.go:186] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.2906021s)
	I0207 21:34:06.481005    9312 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-20220207210111-8704 --name enable-default-cni-20220207210111-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20220207210111-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-20220207210111-8704 --network enable-default-cni-20220207210111-8704 --ip 192.168.76.2 --volume enable-default-cni-20220207210111-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8
	W0207 21:34:08.836050    9312 cli_runner.go:180] docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-20220207210111-8704 --name enable-default-cni-20220207210111-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20220207210111-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-20220207210111-8704 --network enable-default-cni-20220207210111-8704 --ip 192.168.76.2 --volume enable-default-cni-20220207210111-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 returned with exit code 125
	I0207 21:34:08.836050    9312 cli_runner.go:186] Completed: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-20220207210111-8704 --name enable-default-cni-20220207210111-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20220207210111-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-20220207210111-8704 --network enable-default-cni-20220207210111-8704 --ip 192.168.76.2 --volume enable-default-cni-20220207210111-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: (2.3550325s)
	I0207 21:34:08.836050    9312 client.go:171] LocalClient.Create took 1m10.1584589s
	I0207 21:34:10.844014    9312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0207 21:34:10.849984    9312 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220207210111-8704
	W0207 21:34:12.250766    9312 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220207210111-8704 returned with exit code 1
	I0207 21:34:12.250766    9312 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220207210111-8704: (1.4007751s)
	I0207 21:34:12.250766    9312 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 21:34:12.548254    9312 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220207210111-8704
	W0207 21:34:13.873860    9312 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220207210111-8704 returned with exit code 1
	I0207 21:34:13.873860    9312 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220207210111-8704: (1.3255999s)
	W0207 21:34:13.873860    9312 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0207 21:34:13.873860    9312 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 21:34:13.873860    9312 start.go:129] duration metric: createHost completed in 1m15.1992537s
	I0207 21:34:13.881851    9312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0207 21:34:13.887839    9312 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220207210111-8704
	W0207 21:34:15.174033    9312 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220207210111-8704 returned with exit code 1
	I0207 21:34:15.174033    9312 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220207210111-8704: (1.2860267s)
	I0207 21:34:15.174033    9312 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 21:34:15.411561    9312 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220207210111-8704
	W0207 21:34:16.712913    9312 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220207210111-8704 returned with exit code 1
	I0207 21:34:16.712913    9312 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220207210111-8704: (1.3013452s)
	W0207 21:34:16.713472    9312 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0207 21:34:16.713472    9312 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 21:34:16.713524    9312 fix.go:57] fixHost completed within 6m13.3523089s
	I0207 21:34:16.713551    9312 start.go:80] releasing machines lock for "enable-default-cni-20220207210111-8704", held for 6m13.3526137s
	W0207 21:34:16.714253    9312 out.go:241] * Failed to start docker container. Running "minikube delete -p enable-default-cni-20220207210111-8704" may fix it: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-20220207210111-8704 --name enable-default-cni-20220207210111-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20220207210111-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-20220207210111-8704 --network enable-default-cni-20220207210111-8704 --ip 192.168.76.2 --volume enable-default-cni-20220207210111-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	a666e388ca6bbedca8077d030374f58b35cf349356cdc392dd074dd9e25882bb
	
	stderr:
	docker: Error response from daemon: network enable-default-cni-20220207210111-8704 not found.
	
	* Failed to start docker container. Running "minikube delete -p enable-default-cni-20220207210111-8704" may fix it: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-20220207210111-8704 --name enable-default-cni-20220207210111-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20220207210111-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-20220207210111-8704 --network enable-default-cni-20220207210111-8704 --ip 192.168.76.2 --volume enable-default-cni-20220207210111-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	a666e388ca6bbedca8077d030374f58b35cf349356cdc392dd074dd9e25882bb
	
	stderr:
	docker: Error response from daemon: network enable-default-cni-20220207210111-8704 not found.
	
	I0207 21:34:16.722046    9312 out.go:176] 
	W0207 21:34:16.722674    9312 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-20220207210111-8704 --name enable-default-cni-20220207210111-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20220207210111-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-20220207210111-8704 --network enable-default-cni-20220207210111-8704 --ip 192.168.76.2 --volume enable-default-cni-20220207210111-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	a666e388ca6bbedca8077d030374f58b35cf349356cdc392dd074dd9e25882bb
	
	stderr:
	docker: Error response from daemon: network enable-default-cni-20220207210111-8704 not found.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-20220207210111-8704 --name enable-default-cni-20220207210111-8704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20220207210111-8704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-20220207210111-8704 --network enable-default-cni-20220207210111-8704 --ip 192.168.76.2 --volume enable-default-cni-20220207210111-8704:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	a666e388ca6bbedca8077d030374f58b35cf349356cdc392dd074dd9e25882bb
	
	stderr:
	docker: Error response from daemon: network enable-default-cni-20220207210111-8704 not found.
	
	W0207 21:34:16.722674    9312 out.go:241] * 
	* 
	W0207 21:34:16.723707    9312 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0207 21:34:16.728382    9312 out.go:176] 

                                                
                                                
** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (466.70s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (64.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-20220207213422-8704 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p old-k8s-version-20220207213422-8704 --alsologtostderr -v=1: exit status 80 (8.8592075s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-20220207213422-8704 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0207 21:53:32.394214    9288 out.go:297] Setting OutFile to fd 1940 ...
	I0207 21:53:32.485196    9288 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 21:53:32.485196    9288 out.go:310] Setting ErrFile to fd 1544...
	I0207 21:53:32.485196    9288 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 21:53:32.499203    9288 out.go:304] Setting JSON to false
	I0207 21:53:32.499203    9288 mustload.go:65] Loading cluster: old-k8s-version-20220207213422-8704
	I0207 21:53:32.500200    9288 config.go:176] Loaded profile config "old-k8s-version-20220207213422-8704": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0207 21:53:32.515201    9288 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207213422-8704 --format={{.State.Status}}
	I0207 21:53:35.956697    9288 cli_runner.go:186] Completed: docker container inspect old-k8s-version-20220207213422-8704 --format={{.State.Status}}: (3.441478s)
	I0207 21:53:35.956697    9288 host.go:66] Checking if "old-k8s-version-20220207213422-8704" exists ...
	I0207 21:53:35.962704    9288 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220207213422-8704
	I0207 21:53:37.363101    9288 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220207213422-8704: (1.4003897s)
	I0207 21:53:37.365053    9288 pause.go:58] "namespaces" ="keys" ="(MISSING)"
	I0207 21:53:37.371127    9288 out.go:176] * Pausing node old-k8s-version-20220207213422-8704 ... 
	I0207 21:53:37.371127    9288 host.go:66] Checking if "old-k8s-version-20220207213422-8704" exists ...
	I0207 21:53:37.381029    9288 ssh_runner.go:195] Run: systemctl --version
	I0207 21:53:37.385835    9288 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220207213422-8704
	I0207 21:53:38.727233    9288 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220207213422-8704: (1.3413901s)
	I0207 21:53:38.727233    9288 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49999 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\old-k8s-version-20220207213422-8704\id_rsa Username:docker}
	I0207 21:53:38.873493    9288 ssh_runner.go:235] Completed: systemctl --version: (1.4924562s)
	I0207 21:53:38.882347    9288 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0207 21:53:38.911779    9288 pause.go:50] kubelet running: true
	I0207 21:53:38.920592    9288 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0207 21:53:39.372994    9288 retry.go:31] will retry after 276.165072ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0207 21:53:39.661832    9288 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0207 21:53:39.696081    9288 pause.go:50] kubelet running: true
	I0207 21:53:39.704292    9288 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0207 21:53:40.031165    9288 retry.go:31] will retry after 540.190908ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0207 21:53:40.582706    9288 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0207 21:53:40.613902    9288 pause.go:50] kubelet running: true
	I0207 21:53:40.624727    9288 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0207 21:53:40.987165    9288 out.go:176] 
	W0207 21:53:40.987973    9288 out.go:241] X Exiting due to GUEST_PAUSE: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	
	X Exiting due to GUEST_PAUSE: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	
	W0207 21:53:40.988038    9288 out.go:241] * 
	* 
	W0207 21:53:40.994655    9288 out.go:241] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube_pause_26475df06b51455fca7312b7aad83667d1d3f5a8_0.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube_pause_26475df06b51455fca7312b7aad83667d1d3f5a8_0.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0207 21:53:40.996763    9288 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:296: out/minikube-windows-amd64.exe pause -p old-k8s-version-20220207213422-8704 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20220207213422-8704

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:232: (dbg) Done: docker inspect old-k8s-version-20220207213422-8704: (1.4515044s)
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20220207213422-8704:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d91c432d30a605c5d74186777e3b693fbeefe9b8d9cf00a48d4e25d474172da9",
	        "Created": "2022-02-07T21:41:46.3675129Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 253238,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-02-07T21:45:35.5040323Z",
	            "FinishedAt": "2022-02-07T21:45:11.4209185Z"
	        },
	        "Image": "sha256:50384aa4ebef3abc81b3b83296147bd747dcd04d4644d8f3150476ffa93e6889",
	        "ResolvConfPath": "/var/lib/docker/containers/d91c432d30a605c5d74186777e3b693fbeefe9b8d9cf00a48d4e25d474172da9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d91c432d30a605c5d74186777e3b693fbeefe9b8d9cf00a48d4e25d474172da9/hostname",
	        "HostsPath": "/var/lib/docker/containers/d91c432d30a605c5d74186777e3b693fbeefe9b8d9cf00a48d4e25d474172da9/hosts",
	        "LogPath": "/var/lib/docker/containers/d91c432d30a605c5d74186777e3b693fbeefe9b8d9cf00a48d4e25d474172da9/d91c432d30a605c5d74186777e3b693fbeefe9b8d9cf00a48d4e25d474172da9-json.log",
	        "Name": "/old-k8s-version-20220207213422-8704",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220207213422-8704:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220207213422-8704",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4806fe536732d2c3e4e08d67a58b29fbc6b01ed8bfec2810f1405d6a8f5c09e3-init/diff:/var/lib/docker/overlay2/75e1ee3c034aacc956b8c3ecc7ab61ac5c38e660082589ceb37efd240a771cc5/diff:/var/lib/docker/overlay2/189fe0ac50cbe021b1f58d4d3552848c814165ab41c880cc414a3d772ecf8a17/diff:/var/lib/docker/overlay2/1825c50829a945a491708c366e3adc3e6d891ec2fcbd7f13b41f06c64baa55d9/diff:/var/lib/docker/overlay2/0b9358d8d7de1369e9019714824c8f1007b6c08b3ebf296b7b1288610816a2ce/diff:/var/lib/docker/overlay2/689f6514ad269d91cd1861629d1949b077031e825417ef4dfb5621888699407b/diff:/var/lib/docker/overlay2/8dff862a1c6a46807e22567df5955e49a8aa3d0a1f2ad45ca46f2ab5374556fe/diff:/var/lib/docker/overlay2/ee466d69c85d056ef8068fd5652d1a05e5ca08f4f2880d8156cd2f212ceaaaa6/diff:/var/lib/docker/overlay2/86890d1d8e6826b123ee9ec4c463f6f91ad837f07b7147e0c6ef8c7e17b601da/diff:/var/lib/docker/overlay2/b657d041c7bdb28ab2fd58a8e3615ec574e7e5fcace80e88f630332a1ff67ff7/diff:/var/lib/docker/overlay2/4339b0c7baf085cb3dc647fb19cd967f89fdd4e316e2bc806815c81fc17efc59/diff:/var/lib/docker/overlay2/36993c24ec6e3eb908331a1c00b702e3326415b7124d4d1788747ba328eb6e2a/diff:/var/lib/docker/overlay2/5b68d569c7973aeabb60b4d744a1b86cc3ebb8b284e55bbbe33576e97e3ac021/diff:/var/lib/docker/overlay2/57b6ab85187eac783753b7bdcafb75e9d26d3e9d22b614bbfa42fbf4a6e879f8/diff:/var/lib/docker/overlay2/e5f2f9b80a695305ffbe047f65db35cc276ac41f987ec84a5742b3769918cb79/diff:/var/lib/docker/overlay2/06d7d08e9ebfbe3202537757cc03ccaa87b749e7dd8354ae1978c44a1b14a690/diff:/var/lib/docker/overlay2/44604b9a5d1c918e1d3ebe374cc5b01af83b10aef4cbf54e72d7fd0b7be60646/diff:/var/lib/docker/overlay2/9d28038d0516655f0a12f3ec5220089de0a54540a27220e4f412dd3acc577f9b/diff:/var/lib/docker/overlay2/ec704366d20c2f84ce0d53c1b278507dc9cc66331cba15d90521a96d118d45af/diff:/var/lib/docker/overlay2/32b5b8eb800bf64445a63842604512878f22712d00a869b2104a1b528d6e8010/diff:/var/lib/docker/overlay2/6ff5152a44a5b0fd36c63aa1c7199ff420477113981a2dd750c29f82e1509669/diff:/var/lib/docker/overlay2/b42f3edd75dd995daac9924998fafd7fe1b919f222b8185a3dfeef9a762660c7/diff:/var/lib/docker/overlay2/3cd19c2de3ea2cc271124c2c82db46bf5f550625dd02a5cde5c517af93c73caa/diff:/var/lib/docker/overlay2/b41830a6d20150650c5fb37bb60e7c06147734911fda7300a739cd023bb4789a/diff:/var/lib/docker/overlay2/925bf7a180aeb21aee1f13bf31ccc1f05a642fd383aabb499148885dcac5cfeb/diff:/var/lib/docker/overlay2/a5ec93ff5dc3e9d4a9975d8f1176019d102f9e8c319a4d5016f842be26bb5671/diff:/var/lib/docker/overlay2/37e01c18dc12ba0b9bd89093b244ef29456df1fb30fc4a8c3e5596b7b56ada0a/diff:/var/lib/docker/overlay2/6ce0b6587d0750a0ef5383637b91df31d4c1619e3a494b84c8714c5beebf1dbc/diff:/var/lib/docker/overlay2/8f4e875a02344a4926d7f5ad052151ca0eef0364a189b7ca60ebb338213d7c8e/diff:/var/lib/docker/overlay2/2790936ada4be199505c2cab1447b90a25076c4d2cbceadeb4a52026c71b9c60/diff:/var/lib/docker/overlay2/231fcc4021464c7f510cca7eecaabc94216fcc70cb62f97465c0d546064b25b8/diff:/var/lib/docker/overlay2/30845ecf75e8fd0fa04703004fc686bb8aff8eabe9437f4e7a1096a5bca060a3/diff:/var/lib/docker/overlay2/3ae1acee47e31df704424e5e9dbaed72199c1cb3a318825a84cc9d2f08f1d807/diff:/var/lib/docker/overlay2/f9fe697b5ffab06c3cc31c3e2b7d924c32d4f0f4ee8fd29cb5e2b46e586b4d4d/diff:/var/lib/docker/overlay2/68afa844b9fe835f1997b14fe394dac6238ee6a39aa0abfc34a93c062d58f819/diff:/var/lib/docker/overlay2/94b84dda68e5a3dbf4319437e5d026f2c5c705496ca2d9922f7e865879146b56/diff:/var/lib/docker/overlay2/f133dd3fe2bf48f8bd9dced36254f4cc973685d2ddde9ee6e0f2467ea7d34592/diff:/var/lib/docker/overlay2/dafd5505dd817285a71ea03b36fb5684a0c844441c07c909d1e6c47b874b33d4/diff:/var/lib/docker/overlay2/c714cab2096f6325d72b4b73673c329c5db40f169c0d6d5d034bf8af87b90983/diff:/var/lib/docker/overlay2/ea71191eaaa01123105da39dc897cb6e11c028c8a2e91dc62ff85bb5e0fb1884/diff:/var/lib/docker/overlay2/6c554fb0a2463d3ef05cdb7858f9788626b5c72dbb4ea5a0431ec665de90dc74/diff:/var/lib/docker/overlay2/01e92d0b67f2be5d7d6ba3f84ffac8ad1e0c516b03b45346070503f62de32e5a/diff:/var/lib/docker/overlay2/f5f6f40c4df999e1ae2e5733fa6aad1cf8963ebd6e2b9f849164ca5c149a4262/diff:/var/lib/docker/overlay2/e1eb2f89916ebfdb9a8d5aacfd9618edc370a018de0114d193b6069979c02aa7/diff:/var/lib/docker/overlay2/0e35d26329f1b7cf4e1b2bb03588192d3ea37764eab1ccc5a598db2164c932d2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4806fe536732d2c3e4e08d67a58b29fbc6b01ed8bfec2810f1405d6a8f5c09e3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4806fe536732d2c3e4e08d67a58b29fbc6b01ed8bfec2810f1405d6a8f5c09e3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4806fe536732d2c3e4e08d67a58b29fbc6b01ed8bfec2810f1405d6a8f5c09e3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220207213422-8704",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220207213422-8704/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220207213422-8704",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220207213422-8704",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220207213422-8704",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d5996e08974d441703c051b7594bb97555ad737d892669f602b09532b17f8ea5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49999"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49995"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49996"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49997"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49998"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d5996e08974d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220207213422-8704": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d91c432d30a6",
	                        "old-k8s-version-20220207213422-8704"
	                    ],
	                    "NetworkID": "ddff198b4b6acdf48ebd04350383f2ae95e5423309282253f56b82f0624e3210",
	                    "EndpointID": "705ccdbff0911016e67f4bba958ec8f4a31a360bcce5f09543d0d5dd475e45a8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220207213422-8704 -n old-k8s-version-20220207213422-8704
E0207 21:53:42.711146    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\false-20220207210133-8704\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:240: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220207213422-8704 -n old-k8s-version-20220207213422-8704: (8.106072s)
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-20220207213422-8704 logs -n 25
E0207 21:53:54.903590    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\no-preload-20220207213438-8704\client.crt: The system cannot find the path specified.
E0207 21:53:54.909078    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\no-preload-20220207213438-8704\client.crt: The system cannot find the path specified.
E0207 21:53:54.919680    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\no-preload-20220207213438-8704\client.crt: The system cannot find the path specified.
E0207 21:53:54.939739    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\no-preload-20220207213438-8704\client.crt: The system cannot find the path specified.
E0207 21:53:54.981259    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\no-preload-20220207213438-8704\client.crt: The system cannot find the path specified.
E0207 21:53:55.062977    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\no-preload-20220207213438-8704\client.crt: The system cannot find the path specified.
E0207 21:53:55.222991    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\no-preload-20220207213438-8704\client.crt: The system cannot find the path specified.
E0207 21:53:55.543497    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\no-preload-20220207213438-8704\client.crt: The system cannot find the path specified.
E0207 21:53:55.684578    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\skaffold-20220207205033-8704\client.crt: The system cannot find the path specified.
E0207 21:53:56.184501    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\no-preload-20220207213438-8704\client.crt: The system cannot find the path specified.
E0207 21:53:57.465436    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\no-preload-20220207213438-8704\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p old-k8s-version-20220207213422-8704 logs -n 25: (10.1137567s)
helpers_test.go:253: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|-------------------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                    Profile                     |       User        | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|------------------------------------------------|-------------------|---------|-------------------------------|-------------------------------|
	| start   | -p                                                         | embed-certs-20220207213455-8704                | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:39:24 GMT | Mon, 07 Feb 2022 21:46:36 GMT |
	|         | embed-certs-20220207213455-8704                            |                                                |                   |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |                   |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                |                   |         |                               |                               |
	|         | --driver=docker                                            |                                                |                   |         |                               |                               |
	|         | --kubernetes-version=v1.23.3                               |                                                |                   |         |                               |                               |
	| start   | -p no-preload-20220207213438-8704                          | no-preload-20220207213438-8704                 | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:39:38 GMT | Mon, 07 Feb 2022 21:46:51 GMT |
	|         | --memory=2200 --alsologtostderr                            |                                                |                   |         |                               |                               |
	|         | --wait=true --preload=false                                |                                                |                   |         |                               |                               |
	|         | --driver=docker                                            |                                                |                   |         |                               |                               |
	|         | --kubernetes-version=v1.23.4-rc.0                          |                                                |                   |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20220207213455-8704                | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:46:58 GMT | Mon, 07 Feb 2022 21:47:06 GMT |
	|         | embed-certs-20220207213455-8704                            |                                                |                   |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |                   |         |                               |                               |
	| pause   | -p                                                         | embed-certs-20220207213455-8704                | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:47:06 GMT | Mon, 07 Feb 2022 21:47:13 GMT |
	|         | embed-certs-20220207213455-8704                            |                                                |                   |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |                   |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20220207213438-8704                 | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:47:23 GMT | Mon, 07 Feb 2022 21:47:31 GMT |
	|         | no-preload-20220207213438-8704                             |                                                |                   |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |                   |         |                               |                               |
	| unpause | -p                                                         | embed-certs-20220207213455-8704                | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:47:29 GMT | Mon, 07 Feb 2022 21:47:36 GMT |
	|         | embed-certs-20220207213455-8704                            |                                                |                   |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |                   |         |                               |                               |
	| pause   | -p                                                         | no-preload-20220207213438-8704                 | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:47:31 GMT | Mon, 07 Feb 2022 21:47:38 GMT |
	|         | no-preload-20220207213438-8704                             |                                                |                   |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |                   |         |                               |                               |
	| unpause | -p                                                         | no-preload-20220207213438-8704                 | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:47:54 GMT | Mon, 07 Feb 2022 21:48:02 GMT |
	|         | no-preload-20220207213438-8704                             |                                                |                   |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |                   |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20220207213455-8704                | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:47:52 GMT | Mon, 07 Feb 2022 21:48:34 GMT |
	|         | embed-certs-20220207213455-8704                            |                                                |                   |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20220207213455-8704                | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:48:35 GMT | Mon, 07 Feb 2022 21:48:58 GMT |
	|         | embed-certs-20220207213455-8704                            |                                                |                   |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20220207213739-8704 | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:41:32 GMT | Mon, 07 Feb 2022 21:48:58 GMT |
	|         | default-k8s-different-port-20220207213739-8704             |                                                |                   |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                |                   |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker                      |                                                |                   |         |                               |                               |
	|         | --kubernetes-version=v1.23.3                               |                                                |                   |         |                               |                               |
	| delete  | -p                                                         | no-preload-20220207213438-8704                 | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:48:29 GMT | Mon, 07 Feb 2022 21:49:02 GMT |
	|         | no-preload-20220207213438-8704                             |                                                |                   |         |                               |                               |
	| delete  | -p                                                         | no-preload-20220207213438-8704                 | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:49:03 GMT | Mon, 07 Feb 2022 21:49:18 GMT |
	|         | no-preload-20220207213438-8704                             |                                                |                   |         |                               |                               |
	| ssh     | -p                                                         | default-k8s-different-port-20220207213739-8704 | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:49:19 GMT | Mon, 07 Feb 2022 21:49:27 GMT |
	|         | default-k8s-different-port-20220207213739-8704             |                                                |                   |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |                   |         |                               |                               |
	| pause   | -p                                                         | default-k8s-different-port-20220207213739-8704 | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:49:27 GMT | Mon, 07 Feb 2022 21:49:34 GMT |
	|         | default-k8s-different-port-20220207213739-8704             |                                                |                   |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |                   |         |                               |                               |
	| unpause | -p                                                         | default-k8s-different-port-20220207213739-8704 | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:49:49 GMT | Mon, 07 Feb 2022 21:50:00 GMT |
	|         | default-k8s-different-port-20220207213739-8704             |                                                |                   |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |                   |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20220207213739-8704 | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:50:16 GMT | Mon, 07 Feb 2022 21:50:53 GMT |
	|         | default-k8s-different-port-20220207213739-8704             |                                                |                   |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20220207213739-8704 | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:50:54 GMT | Mon, 07 Feb 2022 21:51:07 GMT |
	|         | default-k8s-different-port-20220207213739-8704             |                                                |                   |         |                               |                               |
	| start   | -p newest-cni-20220207214858-8704 --memory=2200            | newest-cni-20220207214858-8704                 | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:48:58 GMT | Mon, 07 Feb 2022 21:51:49 GMT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |                   |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |                   |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |                   |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |                   |         |                               |                               |
	|         | --driver=docker --kubernetes-version=v1.23.4-rc.0          |                                                |                   |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20220207214858-8704                 | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:51:49 GMT | Mon, 07 Feb 2022 21:51:55 GMT |
	|         | newest-cni-20220207214858-8704                             |                                                |                   |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |                   |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |                   |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20220207214858-8704                 | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:51:56 GMT | Mon, 07 Feb 2022 21:52:19 GMT |
	|         | newest-cni-20220207214858-8704                             |                                                |                   |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |                   |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20220207214858-8704                 | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:52:22 GMT | Mon, 07 Feb 2022 21:52:25 GMT |
	|         | newest-cni-20220207214858-8704                             |                                                |                   |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |                   |         |                               |                               |
	| start   | -p                                                         | old-k8s-version-20220207213422-8704            | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:45:21 GMT | Mon, 07 Feb 2022 21:53:06 GMT |
	|         | old-k8s-version-20220207213422-8704                        |                                                |                   |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |                   |         |                               |                               |
	|         | --wait=true --kvm-network=default                          |                                                |                   |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                |                   |         |                               |                               |
	|         | --disable-driver-mounts                                    |                                                |                   |         |                               |                               |
	|         | --keep-context=false                                       |                                                |                   |         |                               |                               |
	|         | --driver=docker                                            |                                                |                   |         |                               |                               |
	|         | --kubernetes-version=v1.16.0                               |                                                |                   |         |                               |                               |
	| ssh     | -p                                                         | old-k8s-version-20220207213422-8704            | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:53:24 GMT | Mon, 07 Feb 2022 21:53:32 GMT |
	|         | old-k8s-version-20220207213422-8704                        |                                                |                   |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |                   |         |                               |                               |
	| start   | -p newest-cni-20220207214858-8704 --memory=2200            | newest-cni-20220207214858-8704                 | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:52:25 GMT | Mon, 07 Feb 2022 21:53:41 GMT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |                   |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |                   |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |                   |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |                   |         |                               |                               |
	|         | --driver=docker --kubernetes-version=v1.23.4-rc.0          |                                                |                   |         |                               |                               |
	|---------|------------------------------------------------------------|------------------------------------------------|-------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/02/07 21:52:25
	Running on machine: minikube3
	Binary: Built with gc go1.17.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0207 21:52:25.612740   12092 out.go:297] Setting OutFile to fd 1840 ...
	I0207 21:52:25.681279   12092 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 21:52:25.681279   12092 out.go:310] Setting ErrFile to fd 1916...
	I0207 21:52:25.681279   12092 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 21:52:25.693347   12092 out.go:304] Setting JSON to false
	I0207 21:52:25.696967   12092 start.go:112] hostinfo: {"hostname":"minikube3","uptime":438364,"bootTime":1643832381,"procs":151,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0207 21:52:25.697107   12092 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0207 21:52:25.700834   12092 out.go:176] * [newest-cni-20220207214858-8704] minikube v1.25.1 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I0207 21:52:25.701396   12092 notify.go:174] Checking for updates...
	I0207 21:52:25.704330   12092 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0207 21:52:25.706886   12092 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0207 21:52:25.709435   12092 out.go:176]   - MINIKUBE_LOCATION=13439
	I0207 21:52:23.140074   11580 system_pods.go:86] 4 kube-system pods found
	I0207 21:52:23.140074   11580 system_pods.go:89] "coredns-5644d7b6d9-5cdl2" [01bc9b5f-cc96-457b-8c41-0a6b2ae0a322] Running
	I0207 21:52:23.140074   11580 system_pods.go:89] "kube-proxy-9kjsq" [01a609fb-de7f-44e4-842c-d8809e8d07b6] Running
	I0207 21:52:23.140074   11580 system_pods.go:89] "metrics-server-5b7b789f-nkphv" [86c8acbf-55aa-4568-a55d-61add7a8d512] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0207 21:52:23.140074   11580 system_pods.go:89] "storage-provisioner" [9f745ff0-c77e-4ca6-b41f-2ea70e2c3047] Running
	I0207 21:52:23.140074   11580 retry.go:31] will retry after 3.261655801s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0207 21:52:25.711461   12092 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0207 21:52:25.712936   12092 config.go:176] Loaded profile config "newest-cni-20220207214858-8704": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4-rc.0
	I0207 21:52:25.713657   12092 driver.go:344] Setting default libvirt URI to qemu:///system
	I0207 21:52:28.404636   12092 docker.go:132] docker version: linux-20.10.12
	I0207 21:52:28.412122   12092 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 21:52:30.643608   12092 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.2314744s)
	I0207 21:52:30.644716   12092 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:60 OomKillDisable:true NGoroutines:52 SystemTime:2022-02-07 21:52:29.5628527 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 21:52:26.433917   11580 system_pods.go:86] 4 kube-system pods found
	I0207 21:52:26.434131   11580 system_pods.go:89] "coredns-5644d7b6d9-5cdl2" [01bc9b5f-cc96-457b-8c41-0a6b2ae0a322] Running
	I0207 21:52:26.434131   11580 system_pods.go:89] "kube-proxy-9kjsq" [01a609fb-de7f-44e4-842c-d8809e8d07b6] Running
	I0207 21:52:26.434131   11580 system_pods.go:89] "metrics-server-5b7b789f-nkphv" [86c8acbf-55aa-4568-a55d-61add7a8d512] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0207 21:52:26.434131   11580 system_pods.go:89] "storage-provisioner" [9f745ff0-c77e-4ca6-b41f-2ea70e2c3047] Running
	I0207 21:52:26.434131   11580 retry.go:31] will retry after 4.086092664s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0207 21:52:30.553700   11580 system_pods.go:86] 4 kube-system pods found
	I0207 21:52:30.553700   11580 system_pods.go:89] "coredns-5644d7b6d9-5cdl2" [01bc9b5f-cc96-457b-8c41-0a6b2ae0a322] Running
	I0207 21:52:30.553700   11580 system_pods.go:89] "kube-proxy-9kjsq" [01a609fb-de7f-44e4-842c-d8809e8d07b6] Running
	I0207 21:52:30.553700   11580 system_pods.go:89] "metrics-server-5b7b789f-nkphv" [86c8acbf-55aa-4568-a55d-61add7a8d512] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0207 21:52:30.553700   11580 system_pods.go:89] "storage-provisioner" [9f745ff0-c77e-4ca6-b41f-2ea70e2c3047] Running
	I0207 21:52:30.553700   11580 retry.go:31] will retry after 6.402197611s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0207 21:52:30.649519   12092 out.go:176] * Using the docker driver based on existing profile
	I0207 21:52:30.650138   12092 start.go:281] selected driver: docker
	I0207 21:52:30.650138   12092 start.go:798] validating driver "docker" against &{Name:newest-cni-20220207214858-8704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4-rc.0 ClusterName:newest-cni-20220207214858-8704 Namespace:default APISe
rverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.4-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fak
e.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 21:52:30.650279   12092 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0207 21:52:30.773816   12092 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 21:52:32.908476   12092 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.1346484s)
	I0207 21:52:32.909102   12092 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:60 OomKillDisable:true NGoroutines:52 SystemTime:2022-02-07 21:52:31.8993857 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_
bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 21:52:32.909558   12092 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	I0207 21:52:32.909829   12092 start_flags.go:850] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0207 21:52:32.909829   12092 cni.go:93] Creating CNI manager for ""
	I0207 21:52:32.909829   12092 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0207 21:52:32.909829   12092 start_flags.go:302] config:
	{Name:newest-cni-20220207214858-8704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4-rc.0 ClusterName:newest-cni-20220207214858-8704 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.4-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true
extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 21:52:32.915087   12092 out.go:176] * Starting control plane node newest-cni-20220207214858-8704 in cluster newest-cni-20220207214858-8704
	I0207 21:52:32.915166   12092 cache.go:120] Beginning downloading kic base image for docker with docker
	I0207 21:52:32.917772   12092 out.go:176] * Pulling base image ...
	I0207 21:52:32.917772   12092 preload.go:132] Checking if preload exists for k8s version v1.23.4-rc.0 and runtime docker
	I0207 21:52:32.918461   12092 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon
	I0207 21:52:32.918461   12092 preload.go:148] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.4-rc.0-docker-overlay2-amd64.tar.lz4
	I0207 21:52:32.918461   12092 cache.go:57] Caching tarball of preloaded images
	I0207 21:52:32.918461   12092 preload.go:174] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.4-rc.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0207 21:52:32.919238   12092 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.4-rc.0 on docker
	I0207 21:52:32.919238   12092 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\newest-cni-20220207214858-8704\config.json ...
	I0207 21:52:34.137225   12092 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon, skipping pull
	I0207 21:52:34.137225   12092 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 exists in daemon, skipping load
	I0207 21:52:34.137344   12092 cache.go:208] Successfully downloaded all kic artifacts
	I0207 21:52:34.137415   12092 start.go:313] acquiring machines lock for newest-cni-20220207214858-8704: {Name:mk8bddd86d66d3fbf3b41ac62ecc647889d08fbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0207 21:52:34.137744   12092 start.go:317] acquired machines lock for "newest-cni-20220207214858-8704" in 197.6µs
	I0207 21:52:34.137932   12092 start.go:93] Skipping create...Using existing machine configuration
	I0207 21:52:34.137974   12092 fix.go:55] fixHost starting: 
	I0207 21:52:34.153034   12092 cli_runner.go:133] Run: docker container inspect newest-cni-20220207214858-8704 --format={{.State.Status}}
	I0207 21:52:35.374078   12092 cli_runner.go:186] Completed: docker container inspect newest-cni-20220207214858-8704 --format={{.State.Status}}: (1.2209656s)
	I0207 21:52:35.374152   12092 fix.go:108] recreateIfNeeded on newest-cni-20220207214858-8704: state=Stopped err=<nil>
	W0207 21:52:35.374152   12092 fix.go:134] unexpected machine state, will restart: <nil>
	I0207 21:52:35.377655   12092 out.go:176] * Restarting existing docker container for "newest-cni-20220207214858-8704" ...
	I0207 21:52:35.382901   12092 cli_runner.go:133] Run: docker start newest-cni-20220207214858-8704
	I0207 21:52:39.750086   12092 cli_runner.go:186] Completed: docker start newest-cni-20220207214858-8704: (4.3670036s)
	I0207 21:52:39.757156   12092 cli_runner.go:133] Run: docker container inspect newest-cni-20220207214858-8704 --format={{.State.Status}}
	I0207 21:52:36.967785   11580 system_pods.go:86] 4 kube-system pods found
	I0207 21:52:36.967785   11580 system_pods.go:89] "coredns-5644d7b6d9-5cdl2" [01bc9b5f-cc96-457b-8c41-0a6b2ae0a322] Running
	I0207 21:52:36.967785   11580 system_pods.go:89] "kube-proxy-9kjsq" [01a609fb-de7f-44e4-842c-d8809e8d07b6] Running
	I0207 21:52:36.967785   11580 system_pods.go:89] "metrics-server-5b7b789f-nkphv" [86c8acbf-55aa-4568-a55d-61add7a8d512] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0207 21:52:36.967785   11580 system_pods.go:89] "storage-provisioner" [9f745ff0-c77e-4ca6-b41f-2ea70e2c3047] Running
	I0207 21:52:36.967785   11580 retry.go:31] will retry after 6.062999549s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0207 21:52:41.105203   12092 cli_runner.go:186] Completed: docker container inspect newest-cni-20220207214858-8704 --format={{.State.Status}}: (1.3479631s)
	I0207 21:52:41.105319   12092 kic.go:420] container "newest-cni-20220207214858-8704" state is running.
	I0207 21:52:41.117161   12092 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220207214858-8704
	I0207 21:52:42.352892   12092 cli_runner.go:186] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220207214858-8704: (1.2357245s)
	I0207 21:52:42.352892   12092 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\newest-cni-20220207214858-8704\config.json ...
	I0207 21:52:42.355528   12092 machine.go:88] provisioning docker machine ...
	I0207 21:52:42.355528   12092 ubuntu.go:169] provisioning hostname "newest-cni-20220207214858-8704"
	I0207 21:52:42.364296   12092 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704
	I0207 21:52:43.596951   12092 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704: (1.2325634s)
	I0207 21:52:43.600405   12092 main.go:130] libmachine: Using SSH client type: native
	I0207 21:52:43.600894   12092 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0xccb900] 0xcce7c0 <nil>  [] 0s} 127.0.0.1 50323 <nil> <nil>}
	I0207 21:52:43.600894   12092 main.go:130] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220207214858-8704 && echo "newest-cni-20220207214858-8704" | sudo tee /etc/hostname
	I0207 21:52:43.833045   12092 main.go:130] libmachine: SSH cmd err, output: <nil>: newest-cni-20220207214858-8704
	
	I0207 21:52:43.840974   12092 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704
	I0207 21:52:45.088645   12092 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704: (1.2476645s)
	I0207 21:52:45.092526   12092 main.go:130] libmachine: Using SSH client type: native
	I0207 21:52:45.092890   12092 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0xccb900] 0xcce7c0 <nil>  [] 0s} 127.0.0.1 50323 <nil> <nil>}
	I0207 21:52:45.092959   12092 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220207214858-8704' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220207214858-8704/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220207214858-8704' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0207 21:52:45.301995   12092 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0207 21:52:45.301995   12092 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0207 21:52:45.302945   12092 ubuntu.go:177] setting up certificates
	I0207 21:52:45.302945   12092 provision.go:83] configureAuth start
	I0207 21:52:45.308923   12092 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220207214858-8704
	I0207 21:52:43.043473   11580 system_pods.go:86] 6 kube-system pods found
	I0207 21:52:43.044025   11580 system_pods.go:89] "coredns-5644d7b6d9-5cdl2" [01bc9b5f-cc96-457b-8c41-0a6b2ae0a322] Running
	I0207 21:52:43.044025   11580 system_pods.go:89] "kube-apiserver-old-k8s-version-20220207213422-8704" [3bbc6549-ecd2-4235-9ca4-c3f9ab75d868] Pending
	I0207 21:52:43.044025   11580 system_pods.go:89] "kube-controller-manager-old-k8s-version-20220207213422-8704" [95e98028-a3f7-4bea-ae02-72175664e79e] Running
	I0207 21:52:43.044025   11580 system_pods.go:89] "kube-proxy-9kjsq" [01a609fb-de7f-44e4-842c-d8809e8d07b6] Running
	I0207 21:52:43.044133   11580 system_pods.go:89] "metrics-server-5b7b789f-nkphv" [86c8acbf-55aa-4568-a55d-61add7a8d512] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0207 21:52:43.044133   11580 system_pods.go:89] "storage-provisioner" [9f745ff0-c77e-4ca6-b41f-2ea70e2c3047] Running
	I0207 21:52:43.044172   11580 retry.go:31] will retry after 10.504197539s: missing components: etcd, kube-apiserver, kube-scheduler
	I0207 21:52:46.556865   12092 cli_runner.go:186] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220207214858-8704: (1.2478331s)
	I0207 21:52:46.557001   12092 provision.go:138] copyHostCerts
	I0207 21:52:46.557123   12092 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0207 21:52:46.557123   12092 exec_runner.go:207] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0207 21:52:46.557727   12092 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0207 21:52:46.558574   12092 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0207 21:52:46.558574   12092 exec_runner.go:207] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0207 21:52:46.559284   12092 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0207 21:52:46.560024   12092 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0207 21:52:46.560024   12092 exec_runner.go:207] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0207 21:52:46.560655   12092 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1675 bytes)
	I0207 21:52:46.561611   12092 provision.go:112] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-20220207214858-8704 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220207214858-8704]
	I0207 21:52:46.703644   12092 provision.go:172] copyRemoteCerts
	I0207 21:52:46.711687   12092 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0207 21:52:46.715628   12092 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704
	I0207 21:52:47.929502   12092 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704: (1.2138001s)
	I0207 21:52:47.929779   12092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50323 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\newest-cni-20220207214858-8704\id_rsa Username:docker}
	I0207 21:52:48.025178   12092 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.3134843s)
	I0207 21:52:48.026076   12092 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1261 bytes)
	I0207 21:52:48.089621   12092 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0207 21:52:48.157862   12092 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0207 21:52:48.213126   12092 provision.go:86] duration metric: configureAuth took 2.9101205s
	I0207 21:52:48.213260   12092 ubuntu.go:193] setting minikube options for container-runtime
	I0207 21:52:48.213880   12092 config.go:176] Loaded profile config "newest-cni-20220207214858-8704": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4-rc.0
	I0207 21:52:48.220584   12092 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704
	I0207 21:52:49.434272   12092 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704: (1.2135513s)
	I0207 21:52:49.437504   12092 main.go:130] libmachine: Using SSH client type: native
	I0207 21:52:49.437568   12092 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0xccb900] 0xcce7c0 <nil>  [] 0s} 127.0.0.1 50323 <nil> <nil>}
	I0207 21:52:49.437568   12092 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0207 21:52:49.635937   12092 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0207 21:52:49.635937   12092 ubuntu.go:71] root file system type: overlay
	I0207 21:52:49.635937   12092 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0207 21:52:49.642712   12092 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704
	I0207 21:52:50.917706   12092 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704: (1.2749875s)
	I0207 21:52:50.920701   12092 main.go:130] libmachine: Using SSH client type: native
	I0207 21:52:50.921402   12092 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0xccb900] 0xcce7c0 <nil>  [] 0s} 127.0.0.1 50323 <nil> <nil>}
	I0207 21:52:50.921402   12092 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0207 21:52:51.164094   12092 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0207 21:52:51.170890   12092 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704
	I0207 21:52:52.385373   12092 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704: (1.2144196s)
	I0207 21:52:52.388813   12092 main.go:130] libmachine: Using SSH client type: native
	I0207 21:52:52.389305   12092 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0xccb900] 0xcce7c0 <nil>  [] 0s} 127.0.0.1 50323 <nil> <nil>}
	I0207 21:52:52.389355   12092 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0207 21:52:52.608492   12092 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0207 21:52:52.608633   12092 machine.go:91] provisioned docker machine in 10.2530518s
	I0207 21:52:52.608633   12092 start.go:267] post-start starting for "newest-cni-20220207214858-8704" (driver="docker")
	I0207 21:52:52.608633   12092 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0207 21:52:52.618322   12092 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0207 21:52:52.624142   12092 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704
	I0207 21:52:53.841626   12092 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704: (1.2174771s)
	I0207 21:52:53.841954   12092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50323 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\newest-cni-20220207214858-8704\id_rsa Username:docker}
	I0207 21:52:53.995206   12092 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.3768775s)
	I0207 21:52:54.004167   12092 ssh_runner.go:195] Run: cat /etc/os-release
	I0207 21:52:54.018264   12092 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0207 21:52:54.018264   12092 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0207 21:52:54.018264   12092 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0207 21:52:54.018264   12092 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0207 21:52:54.018264   12092 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0207 21:52:54.018264   12092 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0207 21:52:54.019983   12092 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\87042.pem -> 87042.pem in /etc/ssl/certs
	I0207 21:52:54.028456   12092 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0207 21:52:54.057504   12092 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\87042.pem --> /etc/ssl/certs/87042.pem (1708 bytes)
	I0207 21:52:54.117720   12092 start.go:270] post-start completed in 1.5090794s
	I0207 21:52:54.126741   12092 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0207 21:52:54.131278   12092 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704
	I0207 21:52:55.347957   12092 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704: (1.2165774s)
	I0207 21:52:55.348187   12092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50323 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\newest-cni-20220207214858-8704\id_rsa Username:docker}
	I0207 21:52:53.564458   11580 system_pods.go:86] 7 kube-system pods found
	I0207 21:52:53.564579   11580 system_pods.go:89] "coredns-5644d7b6d9-5cdl2" [01bc9b5f-cc96-457b-8c41-0a6b2ae0a322] Running
	I0207 21:52:53.564579   11580 system_pods.go:89] "kube-apiserver-old-k8s-version-20220207213422-8704" [3bbc6549-ecd2-4235-9ca4-c3f9ab75d868] Running
	I0207 21:52:53.564579   11580 system_pods.go:89] "kube-controller-manager-old-k8s-version-20220207213422-8704" [95e98028-a3f7-4bea-ae02-72175664e79e] Running
	I0207 21:52:53.564579   11580 system_pods.go:89] "kube-proxy-9kjsq" [01a609fb-de7f-44e4-842c-d8809e8d07b6] Running
	I0207 21:52:53.564665   11580 system_pods.go:89] "kube-scheduler-old-k8s-version-20220207213422-8704" [5973b363-469a-49c1-95fe-942330710096] Running
	I0207 21:52:53.564684   11580 system_pods.go:89] "metrics-server-5b7b789f-nkphv" [86c8acbf-55aa-4568-a55d-61add7a8d512] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0207 21:52:53.564718   11580 system_pods.go:89] "storage-provisioner" [9f745ff0-c77e-4ca6-b41f-2ea70e2c3047] Running
	I0207 21:52:53.564718   11580 retry.go:31] will retry after 12.194240946s: missing components: etcd
	I0207 21:52:55.497090   12092 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.3703418s)
	I0207 21:52:55.497090   12092 fix.go:57] fixHost completed within 21.3590062s
	I0207 21:52:55.497090   12092 start.go:80] releasing machines lock for "newest-cni-20220207214858-8704", held for 21.359237s
	I0207 21:52:55.503412   12092 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220207214858-8704
	I0207 21:52:56.733077   12092 cli_runner.go:186] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220207214858-8704: (1.2296593s)
	I0207 21:52:56.734541   12092 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0207 21:52:56.742177   12092 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704
	I0207 21:52:56.742348   12092 ssh_runner.go:195] Run: systemctl --version
	I0207 21:52:56.748583   12092 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704
	I0207 21:52:58.032618   12092 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704: (1.2904342s)
	I0207 21:52:58.032812   12092 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704: (1.2842223s)
	I0207 21:52:58.032812   12092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50323 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\newest-cni-20220207214858-8704\id_rsa Username:docker}
	I0207 21:52:58.032812   12092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50323 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\newest-cni-20220207214858-8704\id_rsa Username:docker}
	I0207 21:52:58.285602   12092 ssh_runner.go:235] Completed: systemctl --version: (1.5432468s)
	I0207 21:52:58.285602   12092 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.5509939s)
	I0207 21:52:58.294433   12092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0207 21:52:58.338701   12092 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0207 21:52:58.373766   12092 cruntime.go:272] skipping containerd shutdown because we are bound to it
	I0207 21:52:58.382656   12092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0207 21:52:58.414235   12092 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0207 21:52:58.469447   12092 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0207 21:52:58.667288   12092 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0207 21:52:58.825486   12092 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0207 21:52:58.873509   12092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0207 21:52:59.030780   12092 ssh_runner.go:195] Run: sudo systemctl start docker
	I0207 21:52:59.068784   12092 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0207 21:52:59.224982   12092 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0207 21:52:59.328538   12092 out.go:203] * Preparing Kubernetes v1.23.4-rc.0 on Docker 20.10.12 ...
	I0207 21:52:59.334389   12092 cli_runner.go:133] Run: docker exec -t newest-cni-20220207214858-8704 dig +short host.docker.internal
	I0207 21:53:01.013882   12092 cli_runner.go:186] Completed: docker exec -t newest-cni-20220207214858-8704 dig +short host.docker.internal: (1.6794845s)
	I0207 21:53:01.014108   12092 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0207 21:53:01.021099   12092 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0207 21:53:01.037417   12092 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0207 21:53:01.076350   12092 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704
	I0207 21:53:02.309071   12092 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704: (1.2327146s)
	I0207 21:53:02.312133   12092 out.go:176]   - kubelet.network-plugin=cni
	I0207 21:53:02.314914   12092 out.go:176]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0207 21:53:02.317575   12092 out.go:176]   - kubelet.housekeeping-interval=5m
	I0207 21:53:02.317742   12092 preload.go:132] Checking if preload exists for k8s version v1.23.4-rc.0 and runtime docker
	I0207 21:53:02.322051   12092 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0207 21:53:02.403000   12092 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.4-rc.0
	k8s.gcr.io/kube-proxy:v1.23.4-rc.0
	k8s.gcr.io/kube-controller-manager:v1.23.4-rc.0
	k8s.gcr.io/kube-scheduler:v1.23.4-rc.0
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0207 21:53:02.403000   12092 docker.go:537] Images already preloaded, skipping extraction
	I0207 21:53:02.409668   12092 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0207 21:53:02.486517   12092 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.4-rc.0
	k8s.gcr.io/kube-controller-manager:v1.23.4-rc.0
	k8s.gcr.io/kube-proxy:v1.23.4-rc.0
	k8s.gcr.io/kube-scheduler:v1.23.4-rc.0
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0207 21:53:02.486627   12092 cache_images.go:84] Images are preloaded, skipping loading
	I0207 21:53:02.493907   12092 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0207 21:53:02.728787   12092 cni.go:93] Creating CNI manager for ""
	I0207 21:53:02.728891   12092 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0207 21:53:02.728980   12092 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0207 21:53:02.729061   12092 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.4-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220207214858-8704 NodeName:newest-cni-20220207214858-8704 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0207 21:53:02.729460   12092 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "newest-cni-20220207214858-8704"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.4-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0207 21:53:02.729549   12092 kubeadm.go:935] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.4-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220207214858-8704 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.4-rc.0 ClusterName:newest-cni-20220207214858-8704 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0207 21:53:02.739914   12092 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.4-rc.0
	I0207 21:53:02.770163   12092 binaries.go:44] Found k8s binaries, skipping transfer
	I0207 21:53:02.777986   12092 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0207 21:53:02.805729   12092 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (446 bytes)
	I0207 21:53:02.848621   12092 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0207 21:53:02.895865   12092 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2193 bytes)
	I0207 21:53:02.943961   12092 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0207 21:53:02.958528   12092 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0207 21:53:02.982842   12092 certs.go:54] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\newest-cni-20220207214858-8704 for IP: 192.168.58.2
	I0207 21:53:02.983397   12092 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0207 21:53:02.983837   12092 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0207 21:53:02.984382   12092 certs.go:298] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\newest-cni-20220207214858-8704\client.key
	I0207 21:53:02.984783   12092 certs.go:298] skipping minikube signed cert generation: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\newest-cni-20220207214858-8704\apiserver.key.cee25041
	I0207 21:53:02.985086   12092 certs.go:298] skipping aggregator signed cert generation: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\newest-cni-20220207214858-8704\proxy-client.key
	I0207 21:53:02.986340   12092 certs.go:388] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\8704.pem (1338 bytes)
	W0207 21:53:02.986693   12092 certs.go:384] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\8704_empty.pem, impossibly tiny 0 bytes
	I0207 21:53:02.986693   12092 certs.go:388] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0207 21:53:02.986693   12092 certs.go:388] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0207 21:53:02.986693   12092 certs.go:388] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0207 21:53:02.987559   12092 certs.go:388] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0207 21:53:02.988157   12092 certs.go:388] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\87042.pem (1708 bytes)
	I0207 21:53:02.989465   12092 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\newest-cni-20220207214858-8704\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0207 21:53:03.046310   12092 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\newest-cni-20220207214858-8704\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0207 21:53:03.125464   12092 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\newest-cni-20220207214858-8704\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0207 21:53:03.183649   12092 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\newest-cni-20220207214858-8704\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0207 21:53:03.241576   12092 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0207 21:53:03.314572   12092 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0207 21:53:03.379249   12092 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0207 21:53:03.437714   12092 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0207 21:53:03.494072   12092 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\8704.pem --> /usr/share/ca-certificates/8704.pem (1338 bytes)
	I0207 21:53:03.554080   12092 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\87042.pem --> /usr/share/ca-certificates/87042.pem (1708 bytes)
	I0207 21:53:03.609098   12092 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0207 21:53:03.664807   12092 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0207 21:53:03.722757   12092 ssh_runner.go:195] Run: openssl version
	I0207 21:53:03.748843   12092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8704.pem && ln -fs /usr/share/ca-certificates/8704.pem /etc/ssl/certs/8704.pem"
	I0207 21:53:03.780365   12092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8704.pem
	I0207 21:53:03.790364   12092 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb  7 19:41 /usr/share/ca-certificates/8704.pem
	I0207 21:53:03.796358   12092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8704.pem
	I0207 21:53:03.829753   12092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8704.pem /etc/ssl/certs/51391683.0"
	I0207 21:53:03.864247   12092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/87042.pem && ln -fs /usr/share/ca-certificates/87042.pem /etc/ssl/certs/87042.pem"
	I0207 21:53:03.897877   12092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/87042.pem
	I0207 21:53:03.915153   12092 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb  7 19:41 /usr/share/ca-certificates/87042.pem
	I0207 21:53:03.923897   12092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/87042.pem
	I0207 21:53:03.948331   12092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/87042.pem /etc/ssl/certs/3ec20f2e.0"
	I0207 21:53:03.980995   12092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0207 21:53:04.019107   12092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0207 21:53:04.035727   12092 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb  7 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I0207 21:53:04.044981   12092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0207 21:53:04.068862   12092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0207 21:53:04.100360   12092 kubeadm.go:390] StartCluster: {Name:newest-cni-20220207214858-8704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4-rc.0 ClusterName:newest-cni-20220207214858-8704 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.4-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 21:53:04.108510   12092 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0207 21:53:04.197302   12092 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0207 21:53:04.228740   12092 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0207 21:53:04.228829   12092 kubeadm.go:600] restartCluster start
	I0207 21:53:04.238602   12092 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0207 21:53:04.266891   12092 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:04.273383   12092 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704
	I0207 21:53:05.778460   11580 system_pods.go:86] 8 kube-system pods found
	I0207 21:53:05.778514   11580 system_pods.go:89] "coredns-5644d7b6d9-5cdl2" [01bc9b5f-cc96-457b-8c41-0a6b2ae0a322] Running
	I0207 21:53:05.778581   11580 system_pods.go:89] "etcd-old-k8s-version-20220207213422-8704" [9843ed1a-4d7b-44a3-89b6-1fe62845b7e8] Running
	I0207 21:53:05.778625   11580 system_pods.go:89] "kube-apiserver-old-k8s-version-20220207213422-8704" [3bbc6549-ecd2-4235-9ca4-c3f9ab75d868] Running
	I0207 21:53:05.778662   11580 system_pods.go:89] "kube-controller-manager-old-k8s-version-20220207213422-8704" [95e98028-a3f7-4bea-ae02-72175664e79e] Running
	I0207 21:53:05.778710   11580 system_pods.go:89] "kube-proxy-9kjsq" [01a609fb-de7f-44e4-842c-d8809e8d07b6] Running
	I0207 21:53:05.778710   11580 system_pods.go:89] "kube-scheduler-old-k8s-version-20220207213422-8704" [5973b363-469a-49c1-95fe-942330710096] Running
	I0207 21:53:05.778770   11580 system_pods.go:89] "metrics-server-5b7b789f-nkphv" [86c8acbf-55aa-4568-a55d-61add7a8d512] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0207 21:53:05.778770   11580 system_pods.go:89] "storage-provisioner" [9f745ff0-c77e-4ca6-b41f-2ea70e2c3047] Running
	I0207 21:53:05.778770   11580 system_pods.go:126] duration metric: took 56.7461108s to wait for k8s-apps to be running ...
	I0207 21:53:05.778928   11580 system_svc.go:44] waiting for kubelet service to be running ....
	I0207 21:53:05.789741   11580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0207 21:53:05.826213   11580 system_svc.go:56] duration metric: took 46.7814ms WaitForService to wait for kubelet.
	I0207 21:53:05.826737   11580 kubeadm.go:547] duration metric: took 1m8.447686s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0207 21:53:05.826737   11580 node_conditions.go:102] verifying NodePressure condition ...
	I0207 21:53:05.837378   11580 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0207 21:53:05.837378   11580 node_conditions.go:123] node cpu capacity is 16
	I0207 21:53:05.837378   11580 node_conditions.go:105] duration metric: took 10.6411ms to run NodePressure ...
	I0207 21:53:05.837378   11580 start.go:213] waiting for startup goroutines ...
	I0207 21:53:06.073970   11580 start.go:496] kubectl: 1.18.2, cluster: 1.16.0 (minor skew: 2)
	I0207 21:53:06.076808   11580 out.go:176] 
	W0207 21:53:06.077353   11580 out.go:241] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.16.0.
	I0207 21:53:06.080693   11580 out.go:176]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0207 21:53:06.084575   11580 out.go:176] * Done! kubectl is now configured to use "old-k8s-version-20220207213422-8704" cluster and "default" namespace by default
	I0207 21:53:05.529959   12092 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704: (1.2565698s)
	I0207 21:53:05.531587   12092 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220207214858-8704" does not appear in C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0207 21:53:05.532012   12092 kubeconfig.go:127] "newest-cni-20220207214858-8704" context is missing from C:\Users\jenkins.minikube3\minikube-integration\kubeconfig - will repair!
	I0207 21:53:05.533000   12092 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 21:53:05.563892   12092 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0207 21:53:05.595881   12092 api_server.go:165] Checking apiserver status ...
	I0207 21:53:05.605843   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0207 21:53:05.647854   12092 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:05.847966   12092 api_server.go:165] Checking apiserver status ...
	I0207 21:53:05.859277   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0207 21:53:05.895772   12092 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:06.048096   12092 api_server.go:165] Checking apiserver status ...
	I0207 21:53:06.057209   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0207 21:53:06.103801   12092 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:06.248504   12092 api_server.go:165] Checking apiserver status ...
	I0207 21:53:06.257096   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0207 21:53:06.312848   12092 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:06.448941   12092 api_server.go:165] Checking apiserver status ...
	I0207 21:53:06.457634   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0207 21:53:06.501542   12092 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:06.648189   12092 api_server.go:165] Checking apiserver status ...
	I0207 21:53:06.659268   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0207 21:53:06.700131   12092 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:06.848083   12092 api_server.go:165] Checking apiserver status ...
	I0207 21:53:06.858569   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0207 21:53:06.897798   12092 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:07.048754   12092 api_server.go:165] Checking apiserver status ...
	I0207 21:53:07.054954   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0207 21:53:07.095967   12092 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:07.248316   12092 api_server.go:165] Checking apiserver status ...
	I0207 21:53:07.255277   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0207 21:53:07.298561   12092 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:07.448170   12092 api_server.go:165] Checking apiserver status ...
	I0207 21:53:07.458503   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0207 21:53:07.500346   12092 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:07.648728   12092 api_server.go:165] Checking apiserver status ...
	I0207 21:53:07.657589   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0207 21:53:07.695400   12092 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:07.848158   12092 api_server.go:165] Checking apiserver status ...
	I0207 21:53:07.857017   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0207 21:53:07.906826   12092 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:08.048344   12092 api_server.go:165] Checking apiserver status ...
	I0207 21:53:08.058531   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0207 21:53:08.097942   12092 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:08.249948   12092 api_server.go:165] Checking apiserver status ...
	I0207 21:53:08.261285   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0207 21:53:08.308426   12092 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:08.449622   12092 api_server.go:165] Checking apiserver status ...
	I0207 21:53:08.458790   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0207 21:53:08.498465   12092 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:08.649676   12092 api_server.go:165] Checking apiserver status ...
	I0207 21:53:08.657676   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0207 21:53:08.694952   12092 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:08.694952   12092 api_server.go:165] Checking apiserver status ...
	I0207 21:53:08.703857   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0207 21:53:08.747727   12092 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
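The block above is a poll loop: minikube retries `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every 200ms until the apiserver pid appears or a deadline passes. A minimal sketch of that poll-until-deadline pattern (the function name, the fake probe, and the pid value 12092 are illustrative, not minikube's actual code):

```python
import time

def poll_until(probe, timeout_s, interval_s=0.2):
    # Call probe() repeatedly until it returns a truthy value or the
    # deadline passes; return the value, or None on timeout. Mirrors
    # the ~200ms retry cadence visible in the log above.
    deadline = time.monotonic() + timeout_s
    while True:
        value = probe()
        if value:
            return value
        if time.monotonic() >= deadline:
            return None
        time.sleep(interval_s)

# Illustrative probe that "finds the pid" on its third attempt.
attempts = {"n": 0}
def fake_pgrep():
    attempts["n"] += 1
    return 12092 if attempts["n"] >= 3 else None

print(poll_until(fake_pgrep, timeout_s=2.0, interval_s=0.01))  # -> 12092
```

Each failed attempt in the real log corresponds to `pgrep` exiting with status 1 (no matching process), which the loop treats as "not ready yet" rather than a hard error.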
	I0207 21:53:08.748265   12092 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0207 21:53:08.748265   12092 kubeadm.go:1066] stopping kube-system containers ...
	I0207 21:53:08.755406   12092 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0207 21:53:08.865555   12092 docker.go:438] Stopping containers: [b6a2dc2c489b 3ba6dbc5286c 2f5a838b3186 070a03c8bd2b 6f00f94a35a9 f2bf4b900019 4ac0d9369b0d 88824d43ae8a 5b4d76674230 e20e4197b8ba f671581d2d29 c692be2dbcb4 c1cc2af84422 e4eaa53a6243 6a752c653974 49b54541465a 656c3ef9ebc4 88893c4a87f5 6b8821375d93 aa6156ae792e]
	I0207 21:53:08.871802   12092 ssh_runner.go:195] Run: docker stop b6a2dc2c489b 3ba6dbc5286c 2f5a838b3186 070a03c8bd2b 6f00f94a35a9 f2bf4b900019 4ac0d9369b0d 88824d43ae8a 5b4d76674230 e20e4197b8ba f671581d2d29 c692be2dbcb4 c1cc2af84422 e4eaa53a6243 6a752c653974 49b54541465a 656c3ef9ebc4 88893c4a87f5 6b8821375d93 aa6156ae792e
	I0207 21:53:08.973505   12092 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0207 21:53:09.007507   12092 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0207 21:53:09.070754   12092 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5639 Feb  7 21:51 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Feb  7 21:51 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 Feb  7 21:51 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Feb  7 21:51 /etc/kubernetes/scheduler.conf
	
	I0207 21:53:09.080114   12092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0207 21:53:09.111535   12092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0207 21:53:09.144106   12092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0207 21:53:09.176181   12092 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:09.186687   12092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0207 21:53:09.222307   12092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0207 21:53:09.248885   12092 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:09.255874   12092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
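The grep-and-remove steps above implement a simple repair rule: if a kubeconfig-style file no longer mentions the expected control-plane endpoint, delete it so `kubeadm` regenerates it. A sketch of that decision, assuming the endpoint string from the log (the function name and sample file contents are hypothetical):

```python
def needs_remove(conf_text, endpoint="https://control-plane.minikube.internal:8443"):
    # True when the expected control-plane endpoint is absent from the
    # file, i.e. the file is stale and should be removed and regenerated
    # (the log's `grep` followed by `rm -f` step).
    return endpoint not in conf_text

good = "server: https://control-plane.minikube.internal:8443\n"
stale = "server: https://192.168.49.2:8443\n"
print(needs_remove(good))   # False -> keep the file
print(needs_remove(stale))  # True  -> rm -f and regenerate
```

In the log, `grep` exiting with status 1 plays the role of `needs_remove` returning True: `admin.conf` and `kubelet.conf` pass the check, while `controller-manager.conf` and `scheduler.conf` are removed.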
	I0207 21:53:09.283358   12092 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0207 21:53:09.312803   12092 kubeadm.go:677] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0207 21:53:09.312803   12092 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0207 21:53:09.466045   12092 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0207 21:53:10.566510   12092 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.0996125s)
	I0207 21:53:10.566510   12092 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0207 21:53:10.873493   12092 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0207 21:53:11.062971   12092 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0207 21:53:11.335287   12092 api_server.go:51] waiting for apiserver process to appear ...
	I0207 21:53:11.346444   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0207 21:53:11.900445   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0207 21:53:12.401838   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0207 21:53:12.905595   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0207 21:53:13.398529   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0207 21:53:13.899974   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0207 21:53:14.401600   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0207 21:53:14.902699   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0207 21:53:15.401095   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0207 21:53:15.901362   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0207 21:53:16.400407   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0207 21:53:16.902910   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0207 21:53:17.401281   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0207 21:53:17.900681   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0207 21:53:18.134301   12092 api_server.go:71] duration metric: took 6.7989789s to wait for apiserver process to appear ...
	I0207 21:53:18.134301   12092 api_server.go:87] waiting for apiserver healthz status ...
	I0207 21:53:18.134301   12092 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50327/healthz ...
	I0207 21:53:18.141046   12092 api_server.go:256] stopped: https://127.0.0.1:50327/healthz: Get "https://127.0.0.1:50327/healthz": EOF
	I0207 21:53:18.641184   12092 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50327/healthz ...
	I0207 21:53:18.647019   12092 api_server.go:256] stopped: https://127.0.0.1:50327/healthz: Get "https://127.0.0.1:50327/healthz": EOF
	I0207 21:53:19.141857   12092 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50327/healthz ...
	I0207 21:53:24.044519   12092 api_server.go:266] https://127.0.0.1:50327/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0207 21:53:24.044519   12092 api_server.go:102] status: https://127.0.0.1:50327/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
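The healthz polling above treats several responses as "keep waiting": an EOF (apiserver not yet listening), a 403 (anonymous user rejected before RBAC bootstrap completes), and a 500 (post-start hooks still failing). Only a 200 with body `ok` ends the wait. A minimal sketch of that verdict, with a hypothetical function name:

```python
def healthz_verdict(status_code, body):
    # 200 with body "ok" means healthy; anything else (403 from the
    # anonymous user, 500 while post-start hooks fail) means poll again.
    if status_code == 200 and body.strip() == "ok":
        return "healthy"
    return "retry"

print(healthz_verdict(403, '{"reason":"Forbidden"}'))  # retry
print(healthz_verdict(500, "healthz check failed"))    # retry
print(healthz_verdict(200, "ok"))                      # healthy
```

This is why the 403 is logged at warning level but does not abort the restart: it is an expected transient state while the apiserver finishes bootstrapping.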
	I0207 21:53:24.142580   12092 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50327/healthz ...
	I0207 21:53:24.334424   12092 api_server.go:266] https://127.0.0.1:50327/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0207 21:53:24.334659   12092 api_server.go:102] status: https://127.0.0.1:50327/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0207 21:53:24.642472   12092 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50327/healthz ...
	I0207 21:53:24.730976   12092 api_server.go:266] https://127.0.0.1:50327/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0207 21:53:24.731124   12092 api_server.go:102] status: https://127.0.0.1:50327/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
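The verbose 500 bodies above list one check per line, `[+]name ok` for passing checks and `[-]name failed: reason withheld` for failing ones; across successive polls the failing set shrinks (e.g. `bootstrap-controller` flips to ok) until only `rbac/bootstrap-roles` remains. A sketch of extracting the failing check names from such a body (the function name is illustrative):

```python
def failing_checks(healthz_body):
    # Collect the names of failing checks from a verbose /healthz body.
    # Failing lines start with "[-]"; the check name is the first token
    # after that prefix.
    failed = []
    for line in healthz_body.splitlines():
        line = line.strip()
        if line.startswith("[-]"):
            failed.append(line[3:].split()[0])
    return failed

body = """[+]ping ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]etcd ok"""
print(failing_checks(body))  # ['poststarthook/rbac/bootstrap-roles']
```

Diffing this set between polls is a quick way to see which bootstrap stage a slow apiserver start is stuck on.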
	I0207 21:53:25.141713   12092 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50327/healthz ...
	I0207 21:53:25.336133   12092 api_server.go:266] https://127.0.0.1:50327/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0207 21:53:25.336133   12092 api_server.go:102] status: https://127.0.0.1:50327/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0207 21:53:25.642280   12092 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50327/healthz ...
	I0207 21:53:25.741267   12092 api_server.go:266] https://127.0.0.1:50327/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0207 21:53:25.741342   12092 api_server.go:102] status: https://127.0.0.1:50327/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0207 21:53:26.142675   12092 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50327/healthz ...
	I0207 21:53:26.183809   12092 api_server.go:266] https://127.0.0.1:50327/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0207 21:53:26.183809   12092 api_server.go:102] status: https://127.0.0.1:50327/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0207 21:53:26.641588   12092 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50327/healthz ...
	I0207 21:53:26.750956   12092 api_server.go:266] https://127.0.0.1:50327/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0207 21:53:26.751034   12092 api_server.go:102] status: https://127.0.0.1:50327/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0207 21:53:27.141495   12092 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50327/healthz ...
	I0207 21:53:27.239257   12092 api_server.go:266] https://127.0.0.1:50327/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0207 21:53:27.239257   12092 api_server.go:102] status: https://127.0.0.1:50327/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0207 21:53:27.642810   12092 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50327/healthz ...
	I0207 21:53:27.830830   12092 api_server.go:266] https://127.0.0.1:50327/healthz returned 200:
	ok
	I0207 21:53:27.932194   12092 api_server.go:140] control plane version: v1.23.4-rc.0
	I0207 21:53:27.932194   12092 api_server.go:130] duration metric: took 9.797843s to wait for apiserver health ...
	I0207 21:53:27.932358   12092 cni.go:93] Creating CNI manager for ""
	I0207 21:53:27.932358   12092 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0207 21:53:27.932457   12092 system_pods.go:43] waiting for kube-system pods to appear ...
	I0207 21:53:28.066248   12092 system_pods.go:59] 8 kube-system pods found
	I0207 21:53:28.066321   12092 system_pods.go:61] "coredns-64897985d-dx8qt" [1ae104b4-0012-42d0-8649-cad69e3edb18] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0207 21:53:28.066321   12092 system_pods.go:61] "etcd-newest-cni-20220207214858-8704" [d3f0b527-3e63-4a70-bfa7-69de8f22b952] Running
	I0207 21:53:28.066321   12092 system_pods.go:61] "kube-apiserver-newest-cni-20220207214858-8704" [46181458-88de-4f57-857c-e0fb3238a62b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0207 21:53:28.066321   12092 system_pods.go:61] "kube-controller-manager-newest-cni-20220207214858-8704" [2624fa05-b365-4660-ba23-cd6be7b7cc3e] Running
	I0207 21:53:28.066321   12092 system_pods.go:61] "kube-proxy-fhm4g" [176adc08-9f04-43f1-91a1-f4ee9bd5568e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0207 21:53:28.066410   12092 system_pods.go:61] "kube-scheduler-newest-cni-20220207214858-8704" [f3a65188-d868-4d32-9375-d4bb240b9955] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0207 21:53:28.066410   12092 system_pods.go:61] "metrics-server-7f49dcbd7-hmd9h" [5f767c53-9428-45f1-b81f-05359a08115b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0207 21:53:28.066410   12092 system_pods.go:61] "storage-provisioner" [c4ec0fef-4e5a-4d6b-8ab7-e35dd9652808] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0207 21:53:28.066410   12092 system_pods.go:74] duration metric: took 133.9524ms to wait for pod list to return data ...
	I0207 21:53:28.066410   12092 node_conditions.go:102] verifying NodePressure condition ...
	I0207 21:53:28.150539   12092 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0207 21:53:28.150605   12092 node_conditions.go:123] node cpu capacity is 16
	I0207 21:53:28.150645   12092 node_conditions.go:105] duration metric: took 84.2351ms to run NodePressure ...
	I0207 21:53:28.150705   12092 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0207 21:53:30.731007   12092 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.5801938s)
	I0207 21:53:30.731007   12092 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0207 21:53:30.931055   12092 ops.go:34] apiserver oom_adj: -16
	I0207 21:53:30.931209   12092 kubeadm.go:604] restartCluster took 26.7022436s
	I0207 21:53:30.931330   12092 kubeadm.go:392] StartCluster complete in 26.830712s
	I0207 21:53:30.931330   12092 settings.go:142] acquiring lock: {Name:mke99fb8c09012609ce6804e7dfd4d68f5541df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 21:53:30.931715   12092 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0207 21:53:30.933904   12092 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 21:53:31.035812   12092 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220207214858-8704" rescaled to 1
	I0207 21:53:31.036030   12092 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.4-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0207 21:53:31.036030   12092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0207 21:53:31.036030   12092 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0207 21:53:31.038583   12092 out.go:176] * Verifying Kubernetes components...
	I0207 21:53:31.036345   12092 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220207214858-8704"
	I0207 21:53:31.036345   12092 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220207214858-8704"
	I0207 21:53:31.039998   12092 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220207214858-8704"
	I0207 21:53:31.036345   12092 addons.go:65] Setting dashboard=true in profile "newest-cni-20220207214858-8704"
	W0207 21:53:31.039998   12092 addons.go:165] addon metrics-server should already be in state true
	I0207 21:53:31.036345   12092 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220207214858-8704"
	I0207 21:53:31.040294   12092 host.go:66] Checking if "newest-cni-20220207214858-8704" exists ...
	I0207 21:53:31.037010   12092 config.go:176] Loaded profile config "newest-cni-20220207214858-8704": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4-rc.0
	I0207 21:53:31.040478   12092 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220207214858-8704"
	I0207 21:53:31.039901   12092 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220207214858-8704"
	I0207 21:53:31.040294   12092 addons.go:153] Setting addon dashboard=true in "newest-cni-20220207214858-8704"
	W0207 21:53:31.040635   12092 addons.go:165] addon storage-provisioner should already be in state true
	W0207 21:53:31.040635   12092 addons.go:165] addon dashboard should already be in state true
	I0207 21:53:31.040635   12092 host.go:66] Checking if "newest-cni-20220207214858-8704" exists ...
	I0207 21:53:31.040635   12092 host.go:66] Checking if "newest-cni-20220207214858-8704" exists ...
	I0207 21:53:31.059259   12092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0207 21:53:31.066242   12092 cli_runner.go:133] Run: docker container inspect newest-cni-20220207214858-8704 --format={{.State.Status}}
	I0207 21:53:31.066242   12092 cli_runner.go:133] Run: docker container inspect newest-cni-20220207214858-8704 --format={{.State.Status}}
	I0207 21:53:31.067749   12092 cli_runner.go:133] Run: docker container inspect newest-cni-20220207214858-8704 --format={{.State.Status}}
	I0207 21:53:31.068263   12092 cli_runner.go:133] Run: docker container inspect newest-cni-20220207214858-8704 --format={{.State.Status}}
	I0207 21:53:31.578427   12092 start.go:757] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0207 21:53:31.584423   12092 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704
	I0207 21:53:32.756335   12092 cli_runner.go:186] Completed: docker container inspect newest-cni-20220207214858-8704 --format={{.State.Status}}: (1.6885349s)
	I0207 21:53:32.759328   12092 cli_runner.go:186] Completed: docker container inspect newest-cni-20220207214858-8704 --format={{.State.Status}}: (1.693077s)
	I0207 21:53:32.759328   12092 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I0207 21:53:32.761327   12092 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I0207 21:53:32.762326   12092 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0207 21:53:32.762326   12092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0207 21:53:32.770342   12092 cli_runner.go:186] Completed: docker container inspect newest-cni-20220207214858-8704 --format={{.State.Status}}: (1.7020706s)
	I0207 21:53:32.772326   12092 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0207 21:53:32.772326   12092 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704
	I0207 21:53:32.773326   12092 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0207 21:53:32.773326   12092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0207 21:53:32.780329   12092 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704
	I0207 21:53:32.788343   12092 cli_runner.go:186] Completed: docker container inspect newest-cni-20220207214858-8704 --format={{.State.Status}}: (1.7220914s)
	I0207 21:53:32.791325   12092 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0207 21:53:32.791325   12092 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0207 21:53:32.791325   12092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0207 21:53:32.803384   12092 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704
	I0207 21:53:32.805325   12092 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220207214858-8704"
	W0207 21:53:32.805325   12092 addons.go:165] addon default-storageclass should already be in state true
	I0207 21:53:32.805325   12092 host.go:66] Checking if "newest-cni-20220207214858-8704" exists ...
	I0207 21:53:32.818326   12092 cli_runner.go:133] Run: docker container inspect newest-cni-20220207214858-8704 --format={{.State.Status}}
	I0207 21:53:33.366148   12092 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704: (1.7815798s)
	I0207 21:53:33.366148   12092 api_server.go:51] waiting for apiserver process to appear ...
	I0207 21:53:33.380571   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0207 21:53:33.458680   12092 api_server.go:71] duration metric: took 2.4226376s to wait for apiserver process to appear ...
	I0207 21:53:33.459679   12092 api_server.go:87] waiting for apiserver healthz status ...
	I0207 21:53:33.459679   12092 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50327/healthz ...
	I0207 21:53:33.543694   12092 api_server.go:266] https://127.0.0.1:50327/healthz returned 200:
	ok
	I0207 21:53:33.549688   12092 api_server.go:140] control plane version: v1.23.4-rc.0
	I0207 21:53:33.549688   12092 api_server.go:130] duration metric: took 90.0087ms to wait for apiserver health ...
	I0207 21:53:33.549688   12092 system_pods.go:43] waiting for kube-system pods to appear ...
	I0207 21:53:33.640653   12092 system_pods.go:59] 8 kube-system pods found
	I0207 21:53:33.640653   12092 system_pods.go:61] "coredns-64897985d-dx8qt" [1ae104b4-0012-42d0-8649-cad69e3edb18] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0207 21:53:33.640653   12092 system_pods.go:61] "etcd-newest-cni-20220207214858-8704" [d3f0b527-3e63-4a70-bfa7-69de8f22b952] Running
	I0207 21:53:33.640653   12092 system_pods.go:61] "kube-apiserver-newest-cni-20220207214858-8704" [46181458-88de-4f57-857c-e0fb3238a62b] Running
	I0207 21:53:33.640653   12092 system_pods.go:61] "kube-controller-manager-newest-cni-20220207214858-8704" [2624fa05-b365-4660-ba23-cd6be7b7cc3e] Running
	I0207 21:53:33.640653   12092 system_pods.go:61] "kube-proxy-fhm4g" [176adc08-9f04-43f1-91a1-f4ee9bd5568e] Running
	I0207 21:53:33.640653   12092 system_pods.go:61] "kube-scheduler-newest-cni-20220207214858-8704" [f3a65188-d868-4d32-9375-d4bb240b9955] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0207 21:53:33.640653   12092 system_pods.go:61] "metrics-server-7f49dcbd7-hmd9h" [5f767c53-9428-45f1-b81f-05359a08115b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0207 21:53:33.640653   12092 system_pods.go:61] "storage-provisioner" [c4ec0fef-4e5a-4d6b-8ab7-e35dd9652808] Running
	I0207 21:53:33.640653   12092 system_pods.go:74] duration metric: took 90.9649ms to wait for pod list to return data ...
	I0207 21:53:33.640653   12092 default_sa.go:34] waiting for default service account to be created ...
	I0207 21:53:33.649659   12092 default_sa.go:45] found service account: "default"
	I0207 21:53:33.649659   12092 default_sa.go:55] duration metric: took 9.0059ms for default service account to be created ...
	I0207 21:53:33.649659   12092 kubeadm.go:547] duration metric: took 2.613616s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0207 21:53:33.649659   12092 node_conditions.go:102] verifying NodePressure condition ...
	I0207 21:53:33.660459   12092 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0207 21:53:33.660510   12092 node_conditions.go:123] node cpu capacity is 16
	I0207 21:53:33.660657   12092 node_conditions.go:105] duration metric: took 10.9974ms to run NodePressure ...
	I0207 21:53:33.660714   12092 start.go:213] waiting for startup goroutines ...
	I0207 21:53:34.397704   12092 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704: (1.6173673s)
	I0207 21:53:34.397704   12092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50323 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\newest-cni-20220207214858-8704\id_rsa Username:docker}
	I0207 21:53:34.405700   12092 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704: (1.6333659s)
	I0207 21:53:34.405700   12092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50323 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\newest-cni-20220207214858-8704\id_rsa Username:docker}
	I0207 21:53:34.454307   12092 cli_runner.go:186] Completed: docker container inspect newest-cni-20220207214858-8704 --format={{.State.Status}}: (1.6359727s)
	I0207 21:53:34.454307   12092 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0207 21:53:34.454307   12092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0207 21:53:34.464301   12092 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704
	I0207 21:53:34.473346   12092 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704: (1.6699527s)
	I0207 21:53:34.473346   12092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50323 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\newest-cni-20220207214858-8704\id_rsa Username:docker}
	I0207 21:53:34.729665   12092 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0207 21:53:34.729765   12092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0207 21:53:34.750147   12092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0207 21:53:34.773491   12092 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0207 21:53:34.773624   12092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0207 21:53:34.840296   12092 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0207 21:53:34.840296   12092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0207 21:53:35.048894   12092 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0207 21:53:35.049466   12092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0207 21:53:35.138401   12092 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0207 21:53:35.138994   12092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0207 21:53:35.253124   12092 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0207 21:53:35.253124   12092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0207 21:53:35.260903   12092 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0207 21:53:35.260903   12092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0207 21:53:35.368241   12092 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0207 21:53:35.368241   12092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0207 21:53:35.446716   12092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0207 21:53:35.486365   12092 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0207 21:53:35.486455   12092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0207 21:53:35.646044   12092 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0207 21:53:35.646044   12092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0207 21:53:35.853210   12092 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0207 21:53:35.853300   12092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0207 21:53:35.947727   12092 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704: (1.4834179s)
	I0207 21:53:35.947727   12092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50323 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\newest-cni-20220207214858-8704\id_rsa Username:docker}
	I0207 21:53:36.030033   12092 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0207 21:53:36.030033   12092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0207 21:53:36.254045   12092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0207 21:53:36.549494   12092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0207 21:53:39.832923   12092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.0826985s)
	I0207 21:53:39.933783   12092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.4869993s)
	I0207 21:53:39.933783   12092 addons.go:386] Verifying addon metrics-server=true in "newest-cni-20220207214858-8704"
	I0207 21:53:40.832790   12092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.5786592s)
	I0207 21:53:40.832790   12092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.2832737s)
	I0207 21:53:40.836562   12092 out.go:176] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0207 21:53:40.836700   12092 addons.go:417] enableAddons completed in 9.8006192s
	I0207 21:53:41.108838   12092 start.go:496] kubectl: 1.18.2, cluster: 1.23.4-rc.0 (minor skew: 5)
	I0207 21:53:41.113007   12092 out.go:176] 
	W0207 21:53:41.113403   12092 out.go:241] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.23.4-rc.0.
	I0207 21:53:41.116657   12092 out.go:176]   - Want kubectl v1.23.4-rc.0? Try 'minikube kubectl -- get pods -A'
	I0207 21:53:41.118871   12092 out.go:176] * Done! kubectl is now configured to use "newest-cni-20220207214858-8704" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-02-07 21:45:36 UTC, end at Mon 2022-02-07 21:53:57 UTC. --
	Feb 07 21:51:16 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:51:16.261573100Z" level=info msg="ignoring event" container=3ba6644035e80a8d36a643ae511c5b7ef5b0b624f277699213c50d1ce2658a6d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 21:51:16 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:51:16.698548200Z" level=info msg="ignoring event" container=f5830bb53007aa7be5c4765c752c19e5f89f4f78e4af1af2e9918de8ade404da module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 21:51:17 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:51:17.062198200Z" level=info msg="ignoring event" container=dd147702e86e6f6f59b5d8e1883cb1f4fd0a89223f3c4cf779d2ca3c82e09ad1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 21:51:17 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:51:17.375602800Z" level=info msg="ignoring event" container=587d7811e2e96af58cdb29939816249575a7ccf94d5a2a6a12f9f411c5507a23 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 21:51:59 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:51:59.277849300Z" level=error msg="stream copy error: reading from a closed fifo"
	Feb 07 21:51:59 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:51:59.278715800Z" level=error msg="stream copy error: reading from a closed fifo"
	Feb 07 21:51:59 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:51:59.890502200Z" level=error msg="98ec4b0adb654678a5b43fe0fa245576846bb20cc2fb85577be4751d8b759595 cleanup: failed to delete container from containerd: no such container"
	Feb 07 21:52:10 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:52:10.002669900Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Feb 07 21:52:10 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:52:10.002844200Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Feb 07 21:52:10 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:52:10.014957800Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Feb 07 21:52:11 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:52:11.465357400Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Feb 07 21:52:11 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:52:11.608170500Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Feb 07 21:52:28 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:52:28.186404800Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Feb 07 21:52:28 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:52:28.186530500Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Feb 07 21:52:28 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:52:28.215719200Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Feb 07 21:52:30 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:52:30.180779200Z" level=info msg="ignoring event" container=e4d602b4d59293ac01c1a84338f807eceafc9b10590abe9375acd246b2409fa3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 21:52:31 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:52:31.614264400Z" level=info msg="ignoring event" container=c550ec152abeeff30b4d9b806cb5db52c1cebed690f5288c7dde5a28e6743c16 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 21:52:50 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:52:50.187747600Z" level=info msg="ignoring event" container=cc91b93cba5368742c32b6c07ded587705800165fc825d5f1b6a719767e71315 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 21:52:57 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:52:57.709378100Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Feb 07 21:52:57 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:52:57.709557100Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Feb 07 21:52:57 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:52:57.721996900Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Feb 07 21:53:25 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:53:25.279377500Z" level=info msg="ignoring event" container=7247de69d44bed3976d35ba6dc8ba10942fed649986a45445397d342a938b06c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 21:53:40 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:53:40.709316200Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Feb 07 21:53:40 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:53:40.709518000Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Feb 07 21:53:40 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:53:40.723078400Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID
	7247de69d44be       a90209bb39e3d       33 seconds ago       Exited              dashboard-metrics-scraper   3                   e895657da88ca
	60f4881418fd1       e1482a24335a6       About a minute ago   Running             kubernetes-dashboard        0                   c72cc9964cd1f
	a8ed79fbdc062       6e38f40d628db       About a minute ago   Running             storage-provisioner         0                   c73f8f93ca8f4
	a85fc6a2084b6       bf261d1579144       About a minute ago   Running             coredns                     0                   a0e2dff2130f1
	ec95e16fcc809       c21b0c7400f98       About a minute ago   Running             kube-proxy                  0                   ef7860df7145a
	afe896f092f1d       301ddc62b80b1       2 minutes ago        Running             kube-scheduler              0                   0cb1e1b733c50
	8ae07c05261a3       06a629a7e51cd       2 minutes ago        Running             kube-controller-manager     0                   bcf6997955a74
	cc8e4f4dc07cd       b305571ca60a5       2 minutes ago        Running             kube-apiserver              0                   7fbdc6ce27662
	380179822a91f       b2756210eeabf       2 minutes ago        Running             etcd                        0                   2c1104f4ac7fa
	
	* 
	* ==> coredns [a85fc6a2084b] <==
	* .:53
	2022-02-07T21:52:01.908Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2022-02-07T21:52:01.982Z [INFO] CoreDNS-1.6.2
	2022-02-07T21:52:01.982Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2022-02-07T21:52:27.938Z [INFO] plugin/reload: Running configuration MD5 = 034a4984a79adc08e57427d1bc08b68f
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20220207213422-8704
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20220207213422-8704
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=68b41900649d825bc98a620f335c8941b16741bb
	                    minikube.k8s.io/name=old-k8s-version-20220207213422-8704
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_02_07T21_51_41_0700
	                    minikube.k8s.io/version=v1.25.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Feb 2022 21:51:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Feb 2022 21:52:58 +0000   Mon, 07 Feb 2022 21:51:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Feb 2022 21:52:58 +0000   Mon, 07 Feb 2022 21:51:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Feb 2022 21:52:58 +0000   Mon, 07 Feb 2022 21:51:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Feb 2022 21:52:58 +0000   Mon, 07 Feb 2022 21:51:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-20220207213422-8704
	Capacity:
	 cpu:                16
	 ephemeral-storage:  263174212Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             52646744Ki
	 pods:               110
	Allocatable:
	 cpu:                16
	 ephemeral-storage:  263174212Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             52646744Ki
	 pods:               110
	System Info:
	 Machine ID:                 f0d9fc3b84d34ab4ba684459888f0938
	 System UUID:                f0d9fc3b84d34ab4ba684459888f0938
	 Boot ID:                    63de5e8a-b025-4a3e-80b6-1ee5f15fec4d
	 Kernel Version:             5.10.16.3-microsoft-standard-WSL2
	 OS Image:                   Ubuntu 20.04.2 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  docker://20.10.12
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (10 in total)
	  Namespace                  Name                                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-5cdl2                                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m4s
	  kube-system                etcd-old-k8s-version-20220207213422-8704                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                kube-apiserver-old-k8s-version-20220207213422-8704             250m (1%)     0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                kube-controller-manager-old-k8s-version-20220207213422-8704    200m (1%)     0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                kube-proxy-9kjsq                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                kube-scheduler-old-k8s-version-20220207213422-8704             100m (0%)     0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                metrics-server-5b7b789f-nkphv                                  100m (0%)     0 (0%)      300Mi (0%)       0 (0%)         114s
	  kube-system                storage-provisioner                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kubernetes-dashboard       dashboard-metrics-scraper-6b84985989-jz2sz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard       kubernetes-dashboard-766959b846-27bjs                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (4%)   0 (0%)
	  memory             370Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From                                             Message
	  ----    ------                   ----                   ----                                             -------
	  Normal  NodeHasSufficientMemory  2m35s (x8 over 2m36s)  kubelet, old-k8s-version-20220207213422-8704     Node old-k8s-version-20220207213422-8704 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m35s (x8 over 2m36s)  kubelet, old-k8s-version-20220207213422-8704     Node old-k8s-version-20220207213422-8704 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m35s (x7 over 2m36s)  kubelet, old-k8s-version-20220207213422-8704     Node old-k8s-version-20220207213422-8704 status is now: NodeHasSufficientPID
	  Normal  Starting                 119s                   kube-proxy, old-k8s-version-20220207213422-8704  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000006] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000199] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000382] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Feb 7 21:17] WSL2: Performing memory compaction.
	[Feb 7 21:18] WSL2: Performing memory compaction.
	[Feb 7 21:20] WSL2: Performing memory compaction.
	[Feb 7 21:21] WSL2: Performing memory compaction.
	[Feb 7 21:24] WSL2: Performing memory compaction.
	[Feb 7 21:25] WSL2: Performing memory compaction.
	[Feb 7 21:26] WSL2: Performing memory compaction.
	[Feb 7 21:28] WSL2: Performing memory compaction.
	[Feb 7 21:29] WSL2: Performing memory compaction.
	[Feb 7 21:30] WSL2: Performing memory compaction.
	[Feb 7 21:31] WSL2: Performing memory compaction.
	[Feb 7 21:32] WSL2: Performing memory compaction.
	[Feb 7 21:34] WSL2: Performing memory compaction.
	[Feb 7 21:35] WSL2: Performing memory compaction.
	[Feb 7 21:38] WSL2: Performing memory compaction.
	[Feb 7 21:45] WSL2: Performing memory compaction.
	[Feb 7 21:48] WSL2: Performing memory compaction.
	[Feb 7 21:49] WSL2: Performing memory compaction.
	[Feb 7 21:50] WSL2: Performing memory compaction.
	[Feb 7 21:52] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [380179822a91] <==
	* 2022-02-07 21:51:36.277959 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/old-k8s-version-20220207213422-8704\" " with result "range_response_count:0 size:4" took too long (190.2189ms) to execute
	2022-02-07 21:51:36.391547 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-20220207213422-8704\" " with result "range_response_count:1 size:3010" took too long (108.2902ms) to execute
	2022-02-07 21:51:36.391842 W | etcdserver: read-only range request "key:\"/registry/ranges/serviceips\" " with result "range_response_count:1 size:114" took too long (108.1925ms) to execute
	2022-02-07 21:51:36.391880 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-scheduler-old-k8s-version-20220207213422-8704\" " with result "range_response_count:0 size:4" took too long (107.7159ms) to execute
	2022-02-07 21:51:36.391962 W | etcdserver: read-only range request "key:\"/registry/ranges/servicenodeports\" " with result "range_response_count:1 size:119" took too long (108ms) to execute
	2022-02-07 21:51:36.392077 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:0 size:4" took too long (107.81ms) to execute
	2022-02-07 21:51:36.392130 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:4" took too long (107.8673ms) to execute
	2022-02-07 21:51:55.979870 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/coredns\" " with result "range_response_count:1 size:1152" took too long (100.0735ms) to execute
	2022-02-07 21:52:03.092211 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:0 size:5" took too long (102.3648ms) to execute
	2022-02-07 21:52:03.990149 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-20220207213422-8704\" " with result "range_response_count:1 size:3387" took too long (100.8329ms) to execute
	2022-02-07 21:52:04.516220 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:178" took too long (127.6192ms) to execute
	2022-02-07 21:52:04.697498 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/metrics-server\" " with result "range_response_count:1 size:481" took too long (114.9509ms) to execute
	2022-02-07 21:52:04.825561 W | etcdserver: read-only range request "key:\"/registry/storageclasses/standard\" " with result "range_response_count:0 size:5" took too long (131.7702ms) to execute
	2022-02-07 21:52:05.021323 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/metrics-server:system:auth-delegator\" " with result "range_response_count:0 size:5" took too long (103.5511ms) to execute
	2022-02-07 21:52:05.484899 W | etcdserver: read-only range request "key:\"/registry/namespaces/kubernetes-dashboard\" " with result "range_response_count:1 size:547" took too long (101.3322ms) to execute
	2022-02-07 21:52:05.485172 W | etcdserver: read-only range request "key:\"/registry/limitranges/kubernetes-dashboard/\" range_end:\"/registry/limitranges/kubernetes-dashboard0\" " with result "range_response_count:0 size:5" took too long (100.3297ms) to execute
	2022-02-07 21:52:06.000966 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard\" " with result "range_response_count:0 size:5" took too long (115.8329ms) to execute
	2022-02-07 21:52:06.001324 W | etcdserver: read-only range request "key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-6b84985989.16d19f835ba86acc\" " with result "range_response_count:1 size:695" took too long (119.0593ms) to execute
	2022-02-07 21:52:06.086597 W | etcdserver: read-only range request "key:\"/registry/namespaces/kubernetes-dashboard\" " with result "range_response_count:1 size:547" took too long (103.2956ms) to execute
	2022-02-07 21:52:06.086870 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard\" " with result "range_response_count:0 size:5" took too long (101.341ms) to execute
	2022-02-07 21:52:06.478298 W | etcdserver: read-only range request "key:\"/registry/events/kubernetes-dashboard/kubernetes-dashboard-766959b846.16d19f8366d52a5c\" " with result "range_response_count:1 size:675" took too long (183.0937ms) to execute
	2022-02-07 21:52:07.187048 W | etcdserver: request "header:<ID:15638326214409946467 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/specs/kubernetes-dashboard/dashboard-metrics-scraper\" mod_revision:0 > success:<request_put:<key:\"/registry/services/specs/kubernetes-dashboard/dashboard-metrics-scraper\" value_size:807 >> failure:<>>" with result "size:16" took too long (106.3223ms) to execute
	2022-02-07 21:52:07.583357 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-20220207213422-8704\" " with result "range_response_count:1 size:3387" took too long (102.7764ms) to execute
	2022-02-07 21:52:15.278546 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:765" took too long (385.343ms) to execute
	2022-02-07 21:52:29.637723 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:172" took too long (334.2243ms) to execute
	
	* 
	* ==> kernel <==
	*  21:53:59 up  2:38,  0 users,  load average: 4.05, 5.30, 5.28
	Linux old-k8s-version-20220207213422-8704 5.10.16.3-microsoft-standard-WSL2 #1 SMP Fri Apr 2 22:23:49 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [cc8e4f4dc07c] <==
	* I0207 21:51:36.923846       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0207 21:51:36.932083       1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
	I0207 21:51:36.938438       1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
	I0207 21:51:36.938542       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0207 21:51:38.706388       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0207 21:51:38.985858       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0207 21:51:39.338988       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0207 21:51:39.340245       1 controller.go:606] quota admission added evaluator for: endpoints
	I0207 21:51:40.304132       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0207 21:51:41.017554       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0207 21:51:41.241808       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0207 21:51:42.904376       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0207 21:51:55.684436       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0207 21:51:55.991883       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I0207 21:51:56.080610       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0207 21:52:09.106686       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0207 21:52:09.108136       1 handler_proxy.go:99] no RequestInfo found in the context
	E0207 21:52:09.178663       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0207 21:52:09.178794       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0207 21:53:09.183002       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0207 21:53:09.185877       1 handler_proxy.go:99] no RequestInfo found in the context
	E0207 21:53:09.186544       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0207 21:53:09.186714       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [8ae07c05261a] <==
	* I0207 21:52:05.783573       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-766959b846", UID:"75c5be17-10f1-4c66-aecc-7ea1aa804bbd", APIVersion:"apps/v1", ResourceVersion:"430", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0207 21:52:05.879595       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-5b7b789f", UID:"023fdc8f-9b00-401c-a8d3-f31136d70fad", APIVersion:"apps/v1", ResourceVersion:"391", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-5b7b789f-nkphv
	E0207 21:52:05.881799       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-6b84985989" failed with pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0207 21:52:05.881799       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-766959b846" failed with pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0207 21:52:05.881819       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-766959b846", UID:"75c5be17-10f1-4c66-aecc-7ea1aa804bbd", APIVersion:"apps/v1", ResourceVersion:"430", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0207 21:52:05.882478       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"e661e53b-2497-4d1f-9b73-b24d77164efc", APIVersion:"apps/v1", ResourceVersion:"423", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0207 21:52:05.982034       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-766959b846" failed with pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0207 21:52:05.982093       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-766959b846", UID:"75c5be17-10f1-4c66-aecc-7ea1aa804bbd", APIVersion:"apps/v1", ResourceVersion:"430", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0207 21:52:06.003292       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-6b84985989" failed with pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0207 21:52:06.003738       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"e661e53b-2497-4d1f-9b73-b24d77164efc", APIVersion:"apps/v1", ResourceVersion:"423", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0207 21:52:06.090415       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-6b84985989" failed with pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0207 21:52:06.090477       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-766959b846" failed with pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0207 21:52:06.090485       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"e661e53b-2497-4d1f-9b73-b24d77164efc", APIVersion:"apps/v1", ResourceVersion:"423", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0207 21:52:06.090527       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-766959b846", UID:"75c5be17-10f1-4c66-aecc-7ea1aa804bbd", APIVersion:"apps/v1", ResourceVersion:"430", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0207 21:52:06.180549       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"e661e53b-2497-4d1f-9b73-b24d77164efc", APIVersion:"apps/v1", ResourceVersion:"423", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0207 21:52:06.180656       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-6b84985989" failed with pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0207 21:52:07.195431       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-766959b846", UID:"75c5be17-10f1-4c66-aecc-7ea1aa804bbd", APIVersion:"apps/v1", ResourceVersion:"430", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-766959b846-27bjs
	I0207 21:52:07.389247       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"e661e53b-2497-4d1f-9b73-b24d77164efc", APIVersion:"apps/v1", ResourceVersion:"423", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-6b84985989-jz2sz
	E0207 21:52:26.632896       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0207 21:52:28.081541       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0207 21:52:56.887397       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0207 21:53:00.094530       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0207 21:53:27.142277       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0207 21:53:32.100896       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0207 21:53:57.403863       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	* 
	* ==> kube-proxy [ec95e16fcc80] <==
	* W0207 21:52:00.479224       1 proxier.go:584] Failed to read file /lib/modules/5.10.16.3-microsoft-standard-WSL2/modules.builtin with error open /lib/modules/5.10.16.3-microsoft-standard-WSL2/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0207 21:52:00.482359       1 proxier.go:597] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0207 21:52:00.484777       1 proxier.go:597] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0207 21:52:00.486960       1 proxier.go:597] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0207 21:52:00.490107       1 proxier.go:597] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0207 21:52:00.494636       1 proxier.go:597] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0207 21:52:00.587897       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0207 21:52:00.623652       1 node.go:135] Successfully retrieved node IP: 192.168.76.2
	I0207 21:52:00.623799       1 server_others.go:149] Using iptables Proxier.
	I0207 21:52:00.626264       1 server.go:529] Version: v1.16.0
	I0207 21:52:00.628350       1 config.go:131] Starting endpoints config controller
	I0207 21:52:00.628643       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0207 21:52:00.628861       1 config.go:313] Starting service config controller
	I0207 21:52:00.629011       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0207 21:52:00.729500       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0207 21:52:00.729642       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [afe896f092f1] <==
	* W0207 21:51:36.180603       1 authentication.go:79] Authentication is disabled
	I0207 21:51:36.180649       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0207 21:51:36.181984       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0207 21:51:36.286841       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0207 21:51:36.286848       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0207 21:51:36.286911       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0207 21:51:36.286956       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0207 21:51:36.286960       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0207 21:51:36.287554       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0207 21:51:36.287896       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0207 21:51:36.288003       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0207 21:51:36.288872       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0207 21:51:36.289232       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0207 21:51:36.381314       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0207 21:51:37.289261       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0207 21:51:37.291720       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0207 21:51:37.380484       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0207 21:51:37.383892       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0207 21:51:37.384911       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0207 21:51:37.385754       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0207 21:51:37.389743       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0207 21:51:37.391425       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0207 21:51:37.391816       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0207 21:51:37.393133       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0207 21:51:37.393190       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-02-07 21:45:36 UTC, end at Mon 2022-02-07 21:54:00 UTC. --
	Feb 07 21:52:34 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:52:34.882142    5553 pod_workers.go:191] Error syncing pod 8c81d472-4437-4f06-9a37-da1d85d8b073 ("dashboard-metrics-scraper-6b84985989-jz2sz_kubernetes-dashboard(8c81d472-4437-4f06-9a37-da1d85d8b073)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-jz2sz_kubernetes-dashboard(8c81d472-4437-4f06-9a37-da1d85d8b073)"
	Feb 07 21:52:42 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:52:42.637976    5553 pod_workers.go:191] Error syncing pod 86c8acbf-55aa-4568-a55d-61add7a8d512 ("metrics-server-5b7b789f-nkphv_kube-system(86c8acbf-55aa-4568-a55d-61add7a8d512)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Feb 07 21:52:50 old-k8s-version-20220207213422-8704 kubelet[5553]: W0207 21:52:50.255802    5553 container.go:409] Failed to create summary reader for "/kubepods/besteffort/pod8c81d472-4437-4f06-9a37-da1d85d8b073/cc91b93cba5368742c32b6c07ded587705800165fc825d5f1b6a719767e71315": none of the resources are being tracked.
	Feb 07 21:52:50 old-k8s-version-20220207213422-8704 kubelet[5553]: W0207 21:52:50.517107    5553 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-jz2sz through plugin: invalid network status for
	Feb 07 21:52:50 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:52:50.531755    5553 pod_workers.go:191] Error syncing pod 8c81d472-4437-4f06-9a37-da1d85d8b073 ("dashboard-metrics-scraper-6b84985989-jz2sz_kubernetes-dashboard(8c81d472-4437-4f06-9a37-da1d85d8b073)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-jz2sz_kubernetes-dashboard(8c81d472-4437-4f06-9a37-da1d85d8b073)"
	Feb 07 21:52:51 old-k8s-version-20220207213422-8704 kubelet[5553]: W0207 21:52:51.555221    5553 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-jz2sz through plugin: invalid network status for
	Feb 07 21:52:54 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:52:54.880819    5553 pod_workers.go:191] Error syncing pod 8c81d472-4437-4f06-9a37-da1d85d8b073 ("dashboard-metrics-scraper-6b84985989-jz2sz_kubernetes-dashboard(8c81d472-4437-4f06-9a37-da1d85d8b073)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-jz2sz_kubernetes-dashboard(8c81d472-4437-4f06-9a37-da1d85d8b073)"
	Feb 07 21:52:57 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:52:57.723804    5553 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Feb 07 21:52:57 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:52:57.724000    5553 kuberuntime_image.go:50] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Feb 07 21:52:57 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:52:57.724283    5553 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Feb 07 21:52:57 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:52:57.724353    5553 pod_workers.go:191] Error syncing pod 86c8acbf-55aa-4568-a55d-61add7a8d512 ("metrics-server-5b7b789f-nkphv_kube-system(86c8acbf-55aa-4568-a55d-61add7a8d512)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Feb 07 21:53:09 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:53:09.634053    5553 pod_workers.go:191] Error syncing pod 8c81d472-4437-4f06-9a37-da1d85d8b073 ("dashboard-metrics-scraper-6b84985989-jz2sz_kubernetes-dashboard(8c81d472-4437-4f06-9a37-da1d85d8b073)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-jz2sz_kubernetes-dashboard(8c81d472-4437-4f06-9a37-da1d85d8b073)"
	Feb 07 21:53:12 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:53:12.637252    5553 pod_workers.go:191] Error syncing pod 86c8acbf-55aa-4568-a55d-61add7a8d512 ("metrics-server-5b7b789f-nkphv_kube-system(86c8acbf-55aa-4568-a55d-61add7a8d512)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Feb 07 21:53:25 old-k8s-version-20220207213422-8704 kubelet[5553]: W0207 21:53:25.124265    5553 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-jz2sz through plugin: invalid network status for
	Feb 07 21:53:25 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:53:25.637010    5553 pod_workers.go:191] Error syncing pod 86c8acbf-55aa-4568-a55d-61add7a8d512 ("metrics-server-5b7b789f-nkphv_kube-system(86c8acbf-55aa-4568-a55d-61add7a8d512)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Feb 07 21:53:26 old-k8s-version-20220207213422-8704 kubelet[5553]: W0207 21:53:26.213468    5553 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-jz2sz through plugin: invalid network status for
	Feb 07 21:53:26 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:53:26.227658    5553 pod_workers.go:191] Error syncing pod 8c81d472-4437-4f06-9a37-da1d85d8b073 ("dashboard-metrics-scraper-6b84985989-jz2sz_kubernetes-dashboard(8c81d472-4437-4f06-9a37-da1d85d8b073)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-jz2sz_kubernetes-dashboard(8c81d472-4437-4f06-9a37-da1d85d8b073)"
	Feb 07 21:53:27 old-k8s-version-20220207213422-8704 kubelet[5553]: W0207 21:53:27.244435    5553 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-jz2sz through plugin: invalid network status for
	Feb 07 21:53:34 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:53:34.880965    5553 pod_workers.go:191] Error syncing pod 8c81d472-4437-4f06-9a37-da1d85d8b073 ("dashboard-metrics-scraper-6b84985989-jz2sz_kubernetes-dashboard(8c81d472-4437-4f06-9a37-da1d85d8b073)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-jz2sz_kubernetes-dashboard(8c81d472-4437-4f06-9a37-da1d85d8b073)"
	Feb 07 21:53:40 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:53:40.724329    5553 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Feb 07 21:53:40 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:53:40.725681    5553 kuberuntime_image.go:50] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Feb 07 21:53:40 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:53:40.725774    5553 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Feb 07 21:53:40 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:53:40.725810    5553 pod_workers.go:191] Error syncing pod 86c8acbf-55aa-4568-a55d-61add7a8d512 ("metrics-server-5b7b789f-nkphv_kube-system(86c8acbf-55aa-4568-a55d-61add7a8d512)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Feb 07 21:53:47 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:53:47.633514    5553 pod_workers.go:191] Error syncing pod 8c81d472-4437-4f06-9a37-da1d85d8b073 ("dashboard-metrics-scraper-6b84985989-jz2sz_kubernetes-dashboard(8c81d472-4437-4f06-9a37-da1d85d8b073)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-jz2sz_kubernetes-dashboard(8c81d472-4437-4f06-9a37-da1d85d8b073)"
	Feb 07 21:53:51 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:53:51.640099    5553 pod_workers.go:191] Error syncing pod 86c8acbf-55aa-4568-a55d-61add7a8d512 ("metrics-server-5b7b789f-nkphv_kube-system(86c8acbf-55aa-4568-a55d-61add7a8d512)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	
	* 
	* ==> kubernetes-dashboard [60f4881418fd] <==
	* 2022/02/07 21:52:11 Starting overwatch
	2022/02/07 21:52:11 Using namespace: kubernetes-dashboard
	2022/02/07 21:52:11 Using in-cluster config to connect to apiserver
	2022/02/07 21:52:11 Using secret token for csrf signing
	2022/02/07 21:52:11 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/02/07 21:52:11 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/02/07 21:52:11 Successful initial request to the apiserver, version: v1.16.0
	2022/02/07 21:52:11 Generating JWE encryption key
	2022/02/07 21:52:11 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/02/07 21:52:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/02/07 21:52:12 Initializing JWE encryption key from synchronized object
	2022/02/07 21:52:12 Creating in-cluster Sidecar client
	2022/02/07 21:52:13 Serving insecurely on HTTP port: 9090
	2022/02/07 21:52:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/02/07 21:52:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/02/07 21:53:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/02/07 21:53:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [a8ed79fbdc06] <==
	* I0207 21:52:08.380934       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0207 21:52:08.582470       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0207 21:52:08.583083       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0207 21:52:08.679909       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0207 21:52:08.680194       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"df1c4e9a-abf2-4450-a251-b193d27d1266", APIVersion:"v1", ResourceVersion:"490", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-20220207213422-8704_e02cf656-cd1c-466d-b4a8-c6ae790907b4 became leader
	I0207 21:52:08.680316       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20220207213422-8704_e02cf656-cd1c-466d-b4a8-c6ae790907b4!
	I0207 21:52:08.781027       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20220207213422-8704_e02cf656-cd1c-466d-b4a8-c6ae790907b4!
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-20220207213422-8704 -n old-k8s-version-20220207213422-8704
E0207 21:54:05.148754    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\no-preload-20220207213438-8704\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-20220207213422-8704 -n old-k8s-version-20220207213422-8704: (7.9370678s)
helpers_test.go:262: (dbg) Run:  kubectl --context old-k8s-version-20220207213422-8704 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-5b7b789f-nkphv
helpers_test.go:273: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20220207213422-8704 describe pod metrics-server-5b7b789f-nkphv
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220207213422-8704 describe pod metrics-server-5b7b789f-nkphv: exit status 1 (368.8732ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-5b7b789f-nkphv" not found

** /stderr **
helpers_test.go:278: kubectl --context old-k8s-version-20220207213422-8704 describe pod metrics-server-5b7b789f-nkphv: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20220207213422-8704
helpers_test.go:232: (dbg) Done: docker inspect old-k8s-version-20220207213422-8704: (1.3308175s)
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20220207213422-8704:

-- stdout --
	[
	    {
	        "Id": "d91c432d30a605c5d74186777e3b693fbeefe9b8d9cf00a48d4e25d474172da9",
	        "Created": "2022-02-07T21:41:46.3675129Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 253238,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-02-07T21:45:35.5040323Z",
	            "FinishedAt": "2022-02-07T21:45:11.4209185Z"
	        },
	        "Image": "sha256:50384aa4ebef3abc81b3b83296147bd747dcd04d4644d8f3150476ffa93e6889",
	        "ResolvConfPath": "/var/lib/docker/containers/d91c432d30a605c5d74186777e3b693fbeefe9b8d9cf00a48d4e25d474172da9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d91c432d30a605c5d74186777e3b693fbeefe9b8d9cf00a48d4e25d474172da9/hostname",
	        "HostsPath": "/var/lib/docker/containers/d91c432d30a605c5d74186777e3b693fbeefe9b8d9cf00a48d4e25d474172da9/hosts",
	        "LogPath": "/var/lib/docker/containers/d91c432d30a605c5d74186777e3b693fbeefe9b8d9cf00a48d4e25d474172da9/d91c432d30a605c5d74186777e3b693fbeefe9b8d9cf00a48d4e25d474172da9-json.log",
	        "Name": "/old-k8s-version-20220207213422-8704",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220207213422-8704:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220207213422-8704",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4806fe536732d2c3e4e08d67a58b29fbc6b01ed8bfec2810f1405d6a8f5c09e3-init/diff:/var/lib/docker/overlay2/75e1ee3c034aacc956b8c3ecc7ab61ac5c38e660082589ceb37efd240a771cc5/diff:/var/lib/docker/overlay2/189fe0ac50cbe021b1f58d4d3552848c814165ab41c880cc414a3d772ecf8a17/diff:/var/lib/docker/overlay2/1825c50829a945a491708c366e3adc3e6d891ec2fcbd7f13b41f06c64baa55d9/diff:/var/lib/docker/overlay2/0b9358d8d7de1369e9019714824c8f1007b6c08b3ebf296b7b1288610816a2ce/diff:/var/lib/docker/overlay2/689f6514ad269d91cd1861629d1949b077031e825417ef4dfb5621888699407b/diff:/var/lib/docker/overlay2/8dff862a1c6a46807e22567df5955e49a8aa3d0a1f2ad45ca46f2ab5374556fe/diff:/var/lib/docker/overlay2/ee466d69c85d056ef8068fd5652d1a05e5ca08f4f2880d8156cd2f212ceaaaa6/diff:/var/lib/docker/overlay2/86890d1d8e6826b123ee9ec4c463f6f91ad837f07b7147e0c6ef8c7e17b601da/diff:/var/lib/docker/overlay2/b657d041c7bdb28ab2fd58a8e3615ec574e7e5fcace80e88f630332a1ff67ff7/diff:/var/lib/docker/overlay2/4339b0
c7baf085cb3dc647fb19cd967f89fdd4e316e2bc806815c81fc17efc59/diff:/var/lib/docker/overlay2/36993c24ec6e3eb908331a1c00b702e3326415b7124d4d1788747ba328eb6e2a/diff:/var/lib/docker/overlay2/5b68d569c7973aeabb60b4d744a1b86cc3ebb8b284e55bbbe33576e97e3ac021/diff:/var/lib/docker/overlay2/57b6ab85187eac783753b7bdcafb75e9d26d3e9d22b614bbfa42fbf4a6e879f8/diff:/var/lib/docker/overlay2/e5f2f9b80a695305ffbe047f65db35cc276ac41f987ec84a5742b3769918cb79/diff:/var/lib/docker/overlay2/06d7d08e9ebfbe3202537757cc03ccaa87b749e7dd8354ae1978c44a1b14a690/diff:/var/lib/docker/overlay2/44604b9a5d1c918e1d3ebe374cc5b01af83b10aef4cbf54e72d7fd0b7be60646/diff:/var/lib/docker/overlay2/9d28038d0516655f0a12f3ec5220089de0a54540a27220e4f412dd3acc577f9b/diff:/var/lib/docker/overlay2/ec704366d20c2f84ce0d53c1b278507dc9cc66331cba15d90521a96d118d45af/diff:/var/lib/docker/overlay2/32b5b8eb800bf64445a63842604512878f22712d00a869b2104a1b528d6e8010/diff:/var/lib/docker/overlay2/6ff5152a44a5b0fd36c63aa1c7199ff420477113981a2dd750c29f82e1509669/diff:/var/lib/d
ocker/overlay2/b42f3edd75dd995daac9924998fafd7fe1b919f222b8185a3dfeef9a762660c7/diff:/var/lib/docker/overlay2/3cd19c2de3ea2cc271124c2c82db46bf5f550625dd02a5cde5c517af93c73caa/diff:/var/lib/docker/overlay2/b41830a6d20150650c5fb37bb60e7c06147734911fda7300a739cd023bb4789a/diff:/var/lib/docker/overlay2/925bf7a180aeb21aee1f13bf31ccc1f05a642fd383aabb499148885dcac5cfeb/diff:/var/lib/docker/overlay2/a5ec93ff5dc3e9d4a9975d8f1176019d102f9e8c319a4d5016f842be26bb5671/diff:/var/lib/docker/overlay2/37e01c18dc12ba0b9bd89093b244ef29456df1fb30fc4a8c3e5596b7b56ada0a/diff:/var/lib/docker/overlay2/6ce0b6587d0750a0ef5383637b91df31d4c1619e3a494b84c8714c5beebf1dbc/diff:/var/lib/docker/overlay2/8f4e875a02344a4926d7f5ad052151ca0eef0364a189b7ca60ebb338213d7c8e/diff:/var/lib/docker/overlay2/2790936ada4be199505c2cab1447b90a25076c4d2cbceadeb4a52026c71b9c60/diff:/var/lib/docker/overlay2/231fcc4021464c7f510cca7eecaabc94216fcc70cb62f97465c0d546064b25b8/diff:/var/lib/docker/overlay2/30845ecf75e8fd0fa04703004fc686bb8aff8eabe9437f4e7a1096a5bca
060a3/diff:/var/lib/docker/overlay2/3ae1acee47e31df704424e5e9dbaed72199c1cb3a318825a84cc9d2f08f1d807/diff:/var/lib/docker/overlay2/f9fe697b5ffab06c3cc31c3e2b7d924c32d4f0f4ee8fd29cb5e2b46e586b4d4d/diff:/var/lib/docker/overlay2/68afa844b9fe835f1997b14fe394dac6238ee6a39aa0abfc34a93c062d58f819/diff:/var/lib/docker/overlay2/94b84dda68e5a3dbf4319437e5d026f2c5c705496ca2d9922f7e865879146b56/diff:/var/lib/docker/overlay2/f133dd3fe2bf48f8bd9dced36254f4cc973685d2ddde9ee6e0f2467ea7d34592/diff:/var/lib/docker/overlay2/dafd5505dd817285a71ea03b36fb5684a0c844441c07c909d1e6c47b874b33d4/diff:/var/lib/docker/overlay2/c714cab2096f6325d72b4b73673c329c5db40f169c0d6d5d034bf8af87b90983/diff:/var/lib/docker/overlay2/ea71191eaaa01123105da39dc897cb6e11c028c8a2e91dc62ff85bb5e0fb1884/diff:/var/lib/docker/overlay2/6c554fb0a2463d3ef05cdb7858f9788626b5c72dbb4ea5a0431ec665de90dc74/diff:/var/lib/docker/overlay2/01e92d0b67f2be5d7d6ba3f84ffac8ad1e0c516b03b45346070503f62de32e5a/diff:/var/lib/docker/overlay2/f5f6f40c4df999e1ae2e5733fa6aad1cf8963e
bd6e2b9f849164ca5c149a4262/diff:/var/lib/docker/overlay2/e1eb2f89916ebfdb9a8d5aacfd9618edc370a018de0114d193b6069979c02aa7/diff:/var/lib/docker/overlay2/0e35d26329f1b7cf4e1b2bb03588192d3ea37764eab1ccc5a598db2164c932d2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4806fe536732d2c3e4e08d67a58b29fbc6b01ed8bfec2810f1405d6a8f5c09e3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4806fe536732d2c3e4e08d67a58b29fbc6b01ed8bfec2810f1405d6a8f5c09e3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4806fe536732d2c3e4e08d67a58b29fbc6b01ed8bfec2810f1405d6a8f5c09e3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220207213422-8704",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220207213422-8704/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220207213422-8704",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220207213422-8704",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220207213422-8704",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d5996e08974d441703c051b7594bb97555ad737d892669f602b09532b17f8ea5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49999"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49995"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49996"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49997"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49998"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d5996e08974d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220207213422-8704": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d91c432d30a6",
	                        "old-k8s-version-20220207213422-8704"
	                    ],
	                    "NetworkID": "ddff198b4b6acdf48ebd04350383f2ae95e5423309282253f56b82f0624e3210",
	                    "EndpointID": "705ccdbff0911016e67f4bba958ec8f4a31a360bcce5f09543d0d5dd475e45a8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220207213422-8704 -n old-k8s-version-20220207213422-8704
E0207 21:54:13.521262    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\calico-20220207210133-8704\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:240: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220207213422-8704 -n old-k8s-version-20220207213422-8704: (7.5787702s)
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-20220207213422-8704 logs -n 25

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p old-k8s-version-20220207213422-8704 logs -n 25: (8.3205206s)
helpers_test.go:253: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|-------------------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                    Profile                     |       User        | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|------------------------------------------------|-------------------|---------|-------------------------------|-------------------------------|
	| pause   | -p                                                         | embed-certs-20220207213455-8704                | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:47:06 GMT | Mon, 07 Feb 2022 21:47:13 GMT |
	|         | embed-certs-20220207213455-8704                            |                                                |                   |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |                   |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20220207213438-8704                 | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:47:23 GMT | Mon, 07 Feb 2022 21:47:31 GMT |
	|         | no-preload-20220207213438-8704                             |                                                |                   |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |                   |         |                               |                               |
	| unpause | -p                                                         | embed-certs-20220207213455-8704                | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:47:29 GMT | Mon, 07 Feb 2022 21:47:36 GMT |
	|         | embed-certs-20220207213455-8704                            |                                                |                   |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |                   |         |                               |                               |
	| pause   | -p                                                         | no-preload-20220207213438-8704                 | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:47:31 GMT | Mon, 07 Feb 2022 21:47:38 GMT |
	|         | no-preload-20220207213438-8704                             |                                                |                   |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |                   |         |                               |                               |
	| unpause | -p                                                         | no-preload-20220207213438-8704                 | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:47:54 GMT | Mon, 07 Feb 2022 21:48:02 GMT |
	|         | no-preload-20220207213438-8704                             |                                                |                   |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |                   |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20220207213455-8704                | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:47:52 GMT | Mon, 07 Feb 2022 21:48:34 GMT |
	|         | embed-certs-20220207213455-8704                            |                                                |                   |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20220207213455-8704                | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:48:35 GMT | Mon, 07 Feb 2022 21:48:58 GMT |
	|         | embed-certs-20220207213455-8704                            |                                                |                   |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20220207213739-8704 | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:41:32 GMT | Mon, 07 Feb 2022 21:48:58 GMT |
	|         | default-k8s-different-port-20220207213739-8704             |                                                |                   |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                |                   |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker                      |                                                |                   |         |                               |                               |
	|         | --kubernetes-version=v1.23.3                               |                                                |                   |         |                               |                               |
	| delete  | -p                                                         | no-preload-20220207213438-8704                 | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:48:29 GMT | Mon, 07 Feb 2022 21:49:02 GMT |
	|         | no-preload-20220207213438-8704                             |                                                |                   |         |                               |                               |
	| delete  | -p                                                         | no-preload-20220207213438-8704                 | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:49:03 GMT | Mon, 07 Feb 2022 21:49:18 GMT |
	|         | no-preload-20220207213438-8704                             |                                                |                   |         |                               |                               |
	| ssh     | -p                                                         | default-k8s-different-port-20220207213739-8704 | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:49:19 GMT | Mon, 07 Feb 2022 21:49:27 GMT |
	|         | default-k8s-different-port-20220207213739-8704             |                                                |                   |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |                   |         |                               |                               |
	| pause   | -p                                                         | default-k8s-different-port-20220207213739-8704 | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:49:27 GMT | Mon, 07 Feb 2022 21:49:34 GMT |
	|         | default-k8s-different-port-20220207213739-8704             |                                                |                   |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |                   |         |                               |                               |
	| unpause | -p                                                         | default-k8s-different-port-20220207213739-8704 | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:49:49 GMT | Mon, 07 Feb 2022 21:50:00 GMT |
	|         | default-k8s-different-port-20220207213739-8704             |                                                |                   |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |                   |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20220207213739-8704 | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:50:16 GMT | Mon, 07 Feb 2022 21:50:53 GMT |
	|         | default-k8s-different-port-20220207213739-8704             |                                                |                   |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20220207213739-8704 | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:50:54 GMT | Mon, 07 Feb 2022 21:51:07 GMT |
	|         | default-k8s-different-port-20220207213739-8704             |                                                |                   |         |                               |                               |
	| start   | -p newest-cni-20220207214858-8704 --memory=2200            | newest-cni-20220207214858-8704                 | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:48:58 GMT | Mon, 07 Feb 2022 21:51:49 GMT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |                   |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |                   |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |                   |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |                   |         |                               |                               |
	|         | --driver=docker --kubernetes-version=v1.23.4-rc.0          |                                                |                   |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20220207214858-8704                 | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:51:49 GMT | Mon, 07 Feb 2022 21:51:55 GMT |
	|         | newest-cni-20220207214858-8704                             |                                                |                   |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |                   |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |                   |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20220207214858-8704                 | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:51:56 GMT | Mon, 07 Feb 2022 21:52:19 GMT |
	|         | newest-cni-20220207214858-8704                             |                                                |                   |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |                   |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20220207214858-8704                 | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:52:22 GMT | Mon, 07 Feb 2022 21:52:25 GMT |
	|         | newest-cni-20220207214858-8704                             |                                                |                   |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |                   |         |                               |                               |
	| start   | -p                                                         | old-k8s-version-20220207213422-8704            | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:45:21 GMT | Mon, 07 Feb 2022 21:53:06 GMT |
	|         | old-k8s-version-20220207213422-8704                        |                                                |                   |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |                   |         |                               |                               |
	|         | --wait=true --kvm-network=default                          |                                                |                   |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                |                   |         |                               |                               |
	|         | --disable-driver-mounts                                    |                                                |                   |         |                               |                               |
	|         | --keep-context=false                                       |                                                |                   |         |                               |                               |
	|         | --driver=docker                                            |                                                |                   |         |                               |                               |
	|         | --kubernetes-version=v1.16.0                               |                                                |                   |         |                               |                               |
	| ssh     | -p                                                         | old-k8s-version-20220207213422-8704            | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:53:24 GMT | Mon, 07 Feb 2022 21:53:32 GMT |
	|         | old-k8s-version-20220207213422-8704                        |                                                |                   |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |                   |         |                               |                               |
	| start   | -p newest-cni-20220207214858-8704 --memory=2200            | newest-cni-20220207214858-8704                 | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:52:25 GMT | Mon, 07 Feb 2022 21:53:41 GMT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |                   |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |                   |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |                   |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |                   |         |                               |                               |
	|         | --driver=docker --kubernetes-version=v1.23.4-rc.0          |                                                |                   |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20220207214858-8704                 | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:53:49 GMT | Mon, 07 Feb 2022 21:53:57 GMT |
	|         | newest-cni-20220207214858-8704                             |                                                |                   |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |                   |         |                               |                               |
	| -p      | old-k8s-version-20220207213422-8704                        | old-k8s-version-20220207213422-8704            | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:53:50 GMT | Mon, 07 Feb 2022 21:54:00 GMT |
	|         | logs -n 25                                                 |                                                |                   |         |                               |                               |
	| pause   | -p                                                         | newest-cni-20220207214858-8704                 | minikube3\jenkins | v1.25.1 | Mon, 07 Feb 2022 21:53:57 GMT | Mon, 07 Feb 2022 21:54:06 GMT |
	|         | newest-cni-20220207214858-8704                             |                                                |                   |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |                   |         |                               |                               |
	|---------|------------------------------------------------------------|------------------------------------------------|-------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/02/07 21:52:25
	Running on machine: minikube3
	Binary: Built with gc go1.17.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0207 21:52:25.612740   12092 out.go:297] Setting OutFile to fd 1840 ...
	I0207 21:52:25.681279   12092 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 21:52:25.681279   12092 out.go:310] Setting ErrFile to fd 1916...
	I0207 21:52:25.681279   12092 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 21:52:25.693347   12092 out.go:304] Setting JSON to false
	I0207 21:52:25.696967   12092 start.go:112] hostinfo: {"hostname":"minikube3","uptime":438364,"bootTime":1643832381,"procs":151,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0207 21:52:25.697107   12092 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0207 21:52:25.700834   12092 out.go:176] * [newest-cni-20220207214858-8704] minikube v1.25.1 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I0207 21:52:25.701396   12092 notify.go:174] Checking for updates...
	I0207 21:52:25.704330   12092 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0207 21:52:25.706886   12092 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0207 21:52:25.709435   12092 out.go:176]   - MINIKUBE_LOCATION=13439
	I0207 21:52:23.140074   11580 system_pods.go:86] 4 kube-system pods found
	I0207 21:52:23.140074   11580 system_pods.go:89] "coredns-5644d7b6d9-5cdl2" [01bc9b5f-cc96-457b-8c41-0a6b2ae0a322] Running
	I0207 21:52:23.140074   11580 system_pods.go:89] "kube-proxy-9kjsq" [01a609fb-de7f-44e4-842c-d8809e8d07b6] Running
	I0207 21:52:23.140074   11580 system_pods.go:89] "metrics-server-5b7b789f-nkphv" [86c8acbf-55aa-4568-a55d-61add7a8d512] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0207 21:52:23.140074   11580 system_pods.go:89] "storage-provisioner" [9f745ff0-c77e-4ca6-b41f-2ea70e2c3047] Running
	I0207 21:52:23.140074   11580 retry.go:31] will retry after 3.261655801s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0207 21:52:25.711461   12092 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0207 21:52:25.712936   12092 config.go:176] Loaded profile config "newest-cni-20220207214858-8704": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4-rc.0
	I0207 21:52:25.713657   12092 driver.go:344] Setting default libvirt URI to qemu:///system
	I0207 21:52:28.404636   12092 docker.go:132] docker version: linux-20.10.12
	I0207 21:52:28.412122   12092 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 21:52:30.643608   12092 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.2314744s)
	I0207 21:52:30.644716   12092 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:60 OomKillDisable:true NGoroutines:52 SystemTime:2022-02-07 21:52:29.5628527 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 21:52:26.433917   11580 system_pods.go:86] 4 kube-system pods found
	I0207 21:52:26.434131   11580 system_pods.go:89] "coredns-5644d7b6d9-5cdl2" [01bc9b5f-cc96-457b-8c41-0a6b2ae0a322] Running
	I0207 21:52:26.434131   11580 system_pods.go:89] "kube-proxy-9kjsq" [01a609fb-de7f-44e4-842c-d8809e8d07b6] Running
	I0207 21:52:26.434131   11580 system_pods.go:89] "metrics-server-5b7b789f-nkphv" [86c8acbf-55aa-4568-a55d-61add7a8d512] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0207 21:52:26.434131   11580 system_pods.go:89] "storage-provisioner" [9f745ff0-c77e-4ca6-b41f-2ea70e2c3047] Running
	I0207 21:52:26.434131   11580 retry.go:31] will retry after 4.086092664s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0207 21:52:30.553700   11580 system_pods.go:86] 4 kube-system pods found
	I0207 21:52:30.553700   11580 system_pods.go:89] "coredns-5644d7b6d9-5cdl2" [01bc9b5f-cc96-457b-8c41-0a6b2ae0a322] Running
	I0207 21:52:30.553700   11580 system_pods.go:89] "kube-proxy-9kjsq" [01a609fb-de7f-44e4-842c-d8809e8d07b6] Running
	I0207 21:52:30.553700   11580 system_pods.go:89] "metrics-server-5b7b789f-nkphv" [86c8acbf-55aa-4568-a55d-61add7a8d512] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0207 21:52:30.553700   11580 system_pods.go:89] "storage-provisioner" [9f745ff0-c77e-4ca6-b41f-2ea70e2c3047] Running
	I0207 21:52:30.553700   11580 retry.go:31] will retry after 6.402197611s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0207 21:52:30.649519   12092 out.go:176] * Using the docker driver based on existing profile
	I0207 21:52:30.650138   12092 start.go:281] selected driver: docker
	I0207 21:52:30.650138   12092 start.go:798] validating driver "docker" against &{Name:newest-cni-20220207214858-8704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4-rc.0 ClusterName:newest-cni-20220207214858-8704 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.4-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 21:52:30.650279   12092 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0207 21:52:30.773816   12092 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 21:52:32.908476   12092 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.1346484s)
	I0207 21:52:32.909102   12092 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:60 OomKillDisable:true NGoroutines:52 SystemTime:2022-02-07 21:52:31.8993857 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 21:52:32.909558   12092 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	I0207 21:52:32.909829   12092 start_flags.go:850] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0207 21:52:32.909829   12092 cni.go:93] Creating CNI manager for ""
	I0207 21:52:32.909829   12092 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0207 21:52:32.909829   12092 start_flags.go:302] config:
	{Name:newest-cni-20220207214858-8704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4-rc.0 ClusterName:newest-cni-20220207214858-8704 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.4-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true
extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 21:52:32.915087   12092 out.go:176] * Starting control plane node newest-cni-20220207214858-8704 in cluster newest-cni-20220207214858-8704
	I0207 21:52:32.915166   12092 cache.go:120] Beginning downloading kic base image for docker with docker
	I0207 21:52:32.917772   12092 out.go:176] * Pulling base image ...
	I0207 21:52:32.917772   12092 preload.go:132] Checking if preload exists for k8s version v1.23.4-rc.0 and runtime docker
	I0207 21:52:32.918461   12092 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon
	I0207 21:52:32.918461   12092 preload.go:148] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.4-rc.0-docker-overlay2-amd64.tar.lz4
	I0207 21:52:32.918461   12092 cache.go:57] Caching tarball of preloaded images
	I0207 21:52:32.918461   12092 preload.go:174] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.4-rc.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0207 21:52:32.919238   12092 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.4-rc.0 on docker
	I0207 21:52:32.919238   12092 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\newest-cni-20220207214858-8704\config.json ...
	I0207 21:52:34.137225   12092 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon, skipping pull
	I0207 21:52:34.137225   12092 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 exists in daemon, skipping load
	I0207 21:52:34.137344   12092 cache.go:208] Successfully downloaded all kic artifacts
	I0207 21:52:34.137415   12092 start.go:313] acquiring machines lock for newest-cni-20220207214858-8704: {Name:mk8bddd86d66d3fbf3b41ac62ecc647889d08fbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0207 21:52:34.137744   12092 start.go:317] acquired machines lock for "newest-cni-20220207214858-8704" in 197.6µs
	I0207 21:52:34.137932   12092 start.go:93] Skipping create...Using existing machine configuration
	I0207 21:52:34.137974   12092 fix.go:55] fixHost starting: 
	I0207 21:52:34.153034   12092 cli_runner.go:133] Run: docker container inspect newest-cni-20220207214858-8704 --format={{.State.Status}}
	I0207 21:52:35.374078   12092 cli_runner.go:186] Completed: docker container inspect newest-cni-20220207214858-8704 --format={{.State.Status}}: (1.2209656s)
	I0207 21:52:35.374152   12092 fix.go:108] recreateIfNeeded on newest-cni-20220207214858-8704: state=Stopped err=<nil>
	W0207 21:52:35.374152   12092 fix.go:134] unexpected machine state, will restart: <nil>
	I0207 21:52:35.377655   12092 out.go:176] * Restarting existing docker container for "newest-cni-20220207214858-8704" ...
	I0207 21:52:35.382901   12092 cli_runner.go:133] Run: docker start newest-cni-20220207214858-8704
	I0207 21:52:39.750086   12092 cli_runner.go:186] Completed: docker start newest-cni-20220207214858-8704: (4.3670036s)
	I0207 21:52:39.757156   12092 cli_runner.go:133] Run: docker container inspect newest-cni-20220207214858-8704 --format={{.State.Status}}
	I0207 21:52:36.967785   11580 system_pods.go:86] 4 kube-system pods found
	I0207 21:52:36.967785   11580 system_pods.go:89] "coredns-5644d7b6d9-5cdl2" [01bc9b5f-cc96-457b-8c41-0a6b2ae0a322] Running
	I0207 21:52:36.967785   11580 system_pods.go:89] "kube-proxy-9kjsq" [01a609fb-de7f-44e4-842c-d8809e8d07b6] Running
	I0207 21:52:36.967785   11580 system_pods.go:89] "metrics-server-5b7b789f-nkphv" [86c8acbf-55aa-4568-a55d-61add7a8d512] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0207 21:52:36.967785   11580 system_pods.go:89] "storage-provisioner" [9f745ff0-c77e-4ca6-b41f-2ea70e2c3047] Running
	I0207 21:52:36.967785   11580 retry.go:31] will retry after 6.062999549s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0207 21:52:41.105203   12092 cli_runner.go:186] Completed: docker container inspect newest-cni-20220207214858-8704 --format={{.State.Status}}: (1.3479631s)
	I0207 21:52:41.105319   12092 kic.go:420] container "newest-cni-20220207214858-8704" state is running.
	I0207 21:52:41.117161   12092 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220207214858-8704
	I0207 21:52:42.352892   12092 cli_runner.go:186] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220207214858-8704: (1.2357245s)
	I0207 21:52:42.352892   12092 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\newest-cni-20220207214858-8704\config.json ...
	I0207 21:52:42.355528   12092 machine.go:88] provisioning docker machine ...
	I0207 21:52:42.355528   12092 ubuntu.go:169] provisioning hostname "newest-cni-20220207214858-8704"
	I0207 21:52:42.364296   12092 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704
	I0207 21:52:43.596951   12092 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704: (1.2325634s)
	I0207 21:52:43.600405   12092 main.go:130] libmachine: Using SSH client type: native
	I0207 21:52:43.600894   12092 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0xccb900] 0xcce7c0 <nil>  [] 0s} 127.0.0.1 50323 <nil> <nil>}
	I0207 21:52:43.600894   12092 main.go:130] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220207214858-8704 && echo "newest-cni-20220207214858-8704" | sudo tee /etc/hostname
	I0207 21:52:43.833045   12092 main.go:130] libmachine: SSH cmd err, output: <nil>: newest-cni-20220207214858-8704
	
	I0207 21:52:43.840974   12092 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704
	I0207 21:52:45.088645   12092 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704: (1.2476645s)
	I0207 21:52:45.092526   12092 main.go:130] libmachine: Using SSH client type: native
	I0207 21:52:45.092890   12092 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0xccb900] 0xcce7c0 <nil>  [] 0s} 127.0.0.1 50323 <nil> <nil>}
	I0207 21:52:45.092959   12092 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220207214858-8704' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220207214858-8704/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220207214858-8704' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0207 21:52:45.301995   12092 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0207 21:52:45.301995   12092 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0207 21:52:45.302945   12092 ubuntu.go:177] setting up certificates
	I0207 21:52:45.302945   12092 provision.go:83] configureAuth start
	I0207 21:52:45.308923   12092 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220207214858-8704
	I0207 21:52:43.043473   11580 system_pods.go:86] 6 kube-system pods found
	I0207 21:52:43.044025   11580 system_pods.go:89] "coredns-5644d7b6d9-5cdl2" [01bc9b5f-cc96-457b-8c41-0a6b2ae0a322] Running
	I0207 21:52:43.044025   11580 system_pods.go:89] "kube-apiserver-old-k8s-version-20220207213422-8704" [3bbc6549-ecd2-4235-9ca4-c3f9ab75d868] Pending
	I0207 21:52:43.044025   11580 system_pods.go:89] "kube-controller-manager-old-k8s-version-20220207213422-8704" [95e98028-a3f7-4bea-ae02-72175664e79e] Running
	I0207 21:52:43.044025   11580 system_pods.go:89] "kube-proxy-9kjsq" [01a609fb-de7f-44e4-842c-d8809e8d07b6] Running
	I0207 21:52:43.044133   11580 system_pods.go:89] "metrics-server-5b7b789f-nkphv" [86c8acbf-55aa-4568-a55d-61add7a8d512] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0207 21:52:43.044133   11580 system_pods.go:89] "storage-provisioner" [9f745ff0-c77e-4ca6-b41f-2ea70e2c3047] Running
	I0207 21:52:43.044172   11580 retry.go:31] will retry after 10.504197539s: missing components: etcd, kube-apiserver, kube-scheduler
	I0207 21:52:46.556865   12092 cli_runner.go:186] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220207214858-8704: (1.2478331s)
	I0207 21:52:46.557001   12092 provision.go:138] copyHostCerts
	I0207 21:52:46.557123   12092 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0207 21:52:46.557123   12092 exec_runner.go:207] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0207 21:52:46.557727   12092 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0207 21:52:46.558574   12092 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0207 21:52:46.558574   12092 exec_runner.go:207] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0207 21:52:46.559284   12092 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0207 21:52:46.560024   12092 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0207 21:52:46.560024   12092 exec_runner.go:207] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0207 21:52:46.560655   12092 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1675 bytes)
	I0207 21:52:46.561611   12092 provision.go:112] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-20220207214858-8704 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220207214858-8704]
	I0207 21:52:46.703644   12092 provision.go:172] copyRemoteCerts
	I0207 21:52:46.711687   12092 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0207 21:52:46.715628   12092 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704
	I0207 21:52:47.929502   12092 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704: (1.2138001s)
	I0207 21:52:47.929779   12092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50323 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\newest-cni-20220207214858-8704\id_rsa Username:docker}
	I0207 21:52:48.025178   12092 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.3134843s)
	I0207 21:52:48.026076   12092 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1261 bytes)
	I0207 21:52:48.089621   12092 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0207 21:52:48.157862   12092 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0207 21:52:48.213126   12092 provision.go:86] duration metric: configureAuth took 2.9101205s
	I0207 21:52:48.213260   12092 ubuntu.go:193] setting minikube options for container-runtime
	I0207 21:52:48.213880   12092 config.go:176] Loaded profile config "newest-cni-20220207214858-8704": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4-rc.0
	I0207 21:52:48.220584   12092 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704
	I0207 21:52:49.434272   12092 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704: (1.2135513s)
	I0207 21:52:49.437504   12092 main.go:130] libmachine: Using SSH client type: native
	I0207 21:52:49.437568   12092 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0xccb900] 0xcce7c0 <nil>  [] 0s} 127.0.0.1 50323 <nil> <nil>}
	I0207 21:52:49.437568   12092 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0207 21:52:49.635937   12092 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0207 21:52:49.635937   12092 ubuntu.go:71] root file system type: overlay
	I0207 21:52:49.635937   12092 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0207 21:52:49.642712   12092 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704
	I0207 21:52:50.917706   12092 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704: (1.2749875s)
	I0207 21:52:50.920701   12092 main.go:130] libmachine: Using SSH client type: native
	I0207 21:52:50.921402   12092 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0xccb900] 0xcce7c0 <nil>  [] 0s} 127.0.0.1 50323 <nil> <nil>}
	I0207 21:52:50.921402   12092 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0207 21:52:51.164094   12092 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0207 21:52:51.170890   12092 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704
	I0207 21:52:52.385373   12092 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704: (1.2144196s)
	I0207 21:52:52.388813   12092 main.go:130] libmachine: Using SSH client type: native
	I0207 21:52:52.389305   12092 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0xccb900] 0xcce7c0 <nil>  [] 0s} 127.0.0.1 50323 <nil> <nil>}
	I0207 21:52:52.389355   12092 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0207 21:52:52.608492   12092 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0207 21:52:52.608633   12092 machine.go:91] provisioned docker machine in 10.2530518s
	I0207 21:52:52.608633   12092 start.go:267] post-start starting for "newest-cni-20220207214858-8704" (driver="docker")
	I0207 21:52:52.608633   12092 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0207 21:52:52.618322   12092 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0207 21:52:52.624142   12092 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704
	I0207 21:52:53.841626   12092 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704: (1.2174771s)
	I0207 21:52:53.841954   12092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50323 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\newest-cni-20220207214858-8704\id_rsa Username:docker}
	I0207 21:52:53.995206   12092 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.3768775s)
	I0207 21:52:54.004167   12092 ssh_runner.go:195] Run: cat /etc/os-release
	I0207 21:52:54.018264   12092 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0207 21:52:54.018264   12092 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0207 21:52:54.018264   12092 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0207 21:52:54.018264   12092 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0207 21:52:54.018264   12092 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0207 21:52:54.018264   12092 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0207 21:52:54.019983   12092 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\87042.pem -> 87042.pem in /etc/ssl/certs
	I0207 21:52:54.028456   12092 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0207 21:52:54.057504   12092 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\87042.pem --> /etc/ssl/certs/87042.pem (1708 bytes)
	I0207 21:52:54.117720   12092 start.go:270] post-start completed in 1.5090794s
	I0207 21:52:54.126741   12092 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0207 21:52:54.131278   12092 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704
	I0207 21:52:55.347957   12092 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704: (1.2165774s)
	I0207 21:52:55.348187   12092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50323 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\newest-cni-20220207214858-8704\id_rsa Username:docker}
	I0207 21:52:53.564458   11580 system_pods.go:86] 7 kube-system pods found
	I0207 21:52:53.564579   11580 system_pods.go:89] "coredns-5644d7b6d9-5cdl2" [01bc9b5f-cc96-457b-8c41-0a6b2ae0a322] Running
	I0207 21:52:53.564579   11580 system_pods.go:89] "kube-apiserver-old-k8s-version-20220207213422-8704" [3bbc6549-ecd2-4235-9ca4-c3f9ab75d868] Running
	I0207 21:52:53.564579   11580 system_pods.go:89] "kube-controller-manager-old-k8s-version-20220207213422-8704" [95e98028-a3f7-4bea-ae02-72175664e79e] Running
	I0207 21:52:53.564579   11580 system_pods.go:89] "kube-proxy-9kjsq" [01a609fb-de7f-44e4-842c-d8809e8d07b6] Running
	I0207 21:52:53.564665   11580 system_pods.go:89] "kube-scheduler-old-k8s-version-20220207213422-8704" [5973b363-469a-49c1-95fe-942330710096] Running
	I0207 21:52:53.564684   11580 system_pods.go:89] "metrics-server-5b7b789f-nkphv" [86c8acbf-55aa-4568-a55d-61add7a8d512] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0207 21:52:53.564718   11580 system_pods.go:89] "storage-provisioner" [9f745ff0-c77e-4ca6-b41f-2ea70e2c3047] Running
	I0207 21:52:53.564718   11580 retry.go:31] will retry after 12.194240946s: missing components: etcd
	I0207 21:52:55.497090   12092 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.3703418s)
	I0207 21:52:55.497090   12092 fix.go:57] fixHost completed within 21.3590062s
	I0207 21:52:55.497090   12092 start.go:80] releasing machines lock for "newest-cni-20220207214858-8704", held for 21.359237s
	I0207 21:52:55.503412   12092 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220207214858-8704
	I0207 21:52:56.733077   12092 cli_runner.go:186] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220207214858-8704: (1.2296593s)
	I0207 21:52:56.734541   12092 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0207 21:52:56.742177   12092 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704
	I0207 21:52:56.742348   12092 ssh_runner.go:195] Run: systemctl --version
	I0207 21:52:56.748583   12092 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704
	I0207 21:52:58.032618   12092 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704: (1.2904342s)
	I0207 21:52:58.032812   12092 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704: (1.2842223s)
	I0207 21:52:58.032812   12092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50323 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\newest-cni-20220207214858-8704\id_rsa Username:docker}
	I0207 21:52:58.032812   12092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50323 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\newest-cni-20220207214858-8704\id_rsa Username:docker}
	I0207 21:52:58.285602   12092 ssh_runner.go:235] Completed: systemctl --version: (1.5432468s)
	I0207 21:52:58.285602   12092 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.5509939s)
	I0207 21:52:58.294433   12092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0207 21:52:58.338701   12092 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0207 21:52:58.373766   12092 cruntime.go:272] skipping containerd shutdown because we are bound to it
	I0207 21:52:58.382656   12092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0207 21:52:58.414235   12092 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0207 21:52:58.469447   12092 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0207 21:52:58.667288   12092 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0207 21:52:58.825486   12092 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0207 21:52:58.873509   12092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0207 21:52:59.030780   12092 ssh_runner.go:195] Run: sudo systemctl start docker
	I0207 21:52:59.068784   12092 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0207 21:52:59.224982   12092 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0207 21:52:59.328538   12092 out.go:203] * Preparing Kubernetes v1.23.4-rc.0 on Docker 20.10.12 ...
	I0207 21:52:59.334389   12092 cli_runner.go:133] Run: docker exec -t newest-cni-20220207214858-8704 dig +short host.docker.internal
	I0207 21:53:01.013882   12092 cli_runner.go:186] Completed: docker exec -t newest-cni-20220207214858-8704 dig +short host.docker.internal: (1.6794845s)
	I0207 21:53:01.014108   12092 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0207 21:53:01.021099   12092 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0207 21:53:01.037417   12092 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
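The bash one-liner logged above is minikube's idempotent hosts-file update: strip any stale `host.minikube.internal` entry, append the freshly dug IP, then install the result in one `cp`. A minimal sketch of the same idiom, run against a scratch file in `/tmp` (a hypothetical path for illustration) rather than the real `/etc/hosts`:

```shell
# Demonstrate minikube's grep -v + append + replace hosts idiom on a scratch copy.
hosts=/tmp/hosts.demo
printf 'old-ip\thost.minikube.internal\n127.0.0.1\tlocalhost\n' > "$hosts"
# Drop any existing entry for the name, then append the current IP.
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '%s\thost.minikube.internal\n' 192.168.65.2; } > "$hosts.new"
mv "$hosts.new" "$hosts"
# Exactly one entry remains, pointing at the new address.
grep 'host.minikube.internal' "$hosts"
```

Writing to a temp file and copying it back (rather than editing in place) keeps the update atomic from the resolver's point of view.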
	I0207 21:53:01.076350   12092 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704
	I0207 21:53:02.309071   12092 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704: (1.2327146s)
	I0207 21:53:02.312133   12092 out.go:176]   - kubelet.network-plugin=cni
	I0207 21:53:02.314914   12092 out.go:176]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0207 21:53:02.317575   12092 out.go:176]   - kubelet.housekeeping-interval=5m
	I0207 21:53:02.317742   12092 preload.go:132] Checking if preload exists for k8s version v1.23.4-rc.0 and runtime docker
	I0207 21:53:02.322051   12092 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0207 21:53:02.403000   12092 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.4-rc.0
	k8s.gcr.io/kube-proxy:v1.23.4-rc.0
	k8s.gcr.io/kube-controller-manager:v1.23.4-rc.0
	k8s.gcr.io/kube-scheduler:v1.23.4-rc.0
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0207 21:53:02.403000   12092 docker.go:537] Images already preloaded, skipping extraction
	I0207 21:53:02.409668   12092 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0207 21:53:02.486517   12092 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.4-rc.0
	k8s.gcr.io/kube-controller-manager:v1.23.4-rc.0
	k8s.gcr.io/kube-proxy:v1.23.4-rc.0
	k8s.gcr.io/kube-scheduler:v1.23.4-rc.0
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0207 21:53:02.486627   12092 cache_images.go:84] Images are preloaded, skipping loading
	I0207 21:53:02.493907   12092 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0207 21:53:02.728787   12092 cni.go:93] Creating CNI manager for ""
	I0207 21:53:02.728891   12092 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0207 21:53:02.728980   12092 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0207 21:53:02.729061   12092 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.4-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220207214858-8704 NodeName:newest-cni-20220207214858-8704 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:f
alse] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0207 21:53:02.729460   12092 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "newest-cni-20220207214858-8704"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.4-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0207 21:53:02.729549   12092 kubeadm.go:935] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.4-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220207214858-8704 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.4-rc.0 ClusterName:newest-cni-20220207214858-8704 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0207 21:53:02.739914   12092 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.4-rc.0
	I0207 21:53:02.770163   12092 binaries.go:44] Found k8s binaries, skipping transfer
	I0207 21:53:02.777986   12092 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0207 21:53:02.805729   12092 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (446 bytes)
	I0207 21:53:02.848621   12092 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0207 21:53:02.895865   12092 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2193 bytes)
	I0207 21:53:02.943961   12092 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0207 21:53:02.958528   12092 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0207 21:53:02.982842   12092 certs.go:54] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\newest-cni-20220207214858-8704 for IP: 192.168.58.2
	I0207 21:53:02.983397   12092 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0207 21:53:02.983837   12092 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0207 21:53:02.984382   12092 certs.go:298] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\newest-cni-20220207214858-8704\client.key
	I0207 21:53:02.984783   12092 certs.go:298] skipping minikube signed cert generation: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\newest-cni-20220207214858-8704\apiserver.key.cee25041
	I0207 21:53:02.985086   12092 certs.go:298] skipping aggregator signed cert generation: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\newest-cni-20220207214858-8704\proxy-client.key
	I0207 21:53:02.986340   12092 certs.go:388] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\8704.pem (1338 bytes)
	W0207 21:53:02.986693   12092 certs.go:384] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\8704_empty.pem, impossibly tiny 0 bytes
	I0207 21:53:02.986693   12092 certs.go:388] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0207 21:53:02.986693   12092 certs.go:388] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0207 21:53:02.986693   12092 certs.go:388] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0207 21:53:02.987559   12092 certs.go:388] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0207 21:53:02.988157   12092 certs.go:388] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\87042.pem (1708 bytes)
	I0207 21:53:02.989465   12092 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\newest-cni-20220207214858-8704\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0207 21:53:03.046310   12092 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\newest-cni-20220207214858-8704\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0207 21:53:03.125464   12092 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\newest-cni-20220207214858-8704\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0207 21:53:03.183649   12092 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\newest-cni-20220207214858-8704\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0207 21:53:03.241576   12092 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0207 21:53:03.314572   12092 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0207 21:53:03.379249   12092 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0207 21:53:03.437714   12092 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0207 21:53:03.494072   12092 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\8704.pem --> /usr/share/ca-certificates/8704.pem (1338 bytes)
	I0207 21:53:03.554080   12092 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\87042.pem --> /usr/share/ca-certificates/87042.pem (1708 bytes)
	I0207 21:53:03.609098   12092 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0207 21:53:03.664807   12092 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0207 21:53:03.722757   12092 ssh_runner.go:195] Run: openssl version
	I0207 21:53:03.748843   12092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8704.pem && ln -fs /usr/share/ca-certificates/8704.pem /etc/ssl/certs/8704.pem"
	I0207 21:53:03.780365   12092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8704.pem
	I0207 21:53:03.790364   12092 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb  7 19:41 /usr/share/ca-certificates/8704.pem
	I0207 21:53:03.796358   12092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8704.pem
	I0207 21:53:03.829753   12092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8704.pem /etc/ssl/certs/51391683.0"
	I0207 21:53:03.864247   12092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/87042.pem && ln -fs /usr/share/ca-certificates/87042.pem /etc/ssl/certs/87042.pem"
	I0207 21:53:03.897877   12092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/87042.pem
	I0207 21:53:03.915153   12092 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb  7 19:41 /usr/share/ca-certificates/87042.pem
	I0207 21:53:03.923897   12092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/87042.pem
	I0207 21:53:03.948331   12092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/87042.pem /etc/ssl/certs/3ec20f2e.0"
	I0207 21:53:03.980995   12092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0207 21:53:04.019107   12092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0207 21:53:04.035727   12092 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb  7 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I0207 21:53:04.044981   12092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0207 21:53:04.068862   12092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
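The symlink names created above (`51391683.0`, `3ec20f2e.0`, `b5213941.0`) are not arbitrary: OpenSSL looks CA certificates up by subject-hash, so minikube names each link `<hash>.0` using the value printed by `openssl x509 -hash` (the `c_rehash` convention). A sketch with a throwaway self-signed CA under `/tmp` (hypothetical paths; requires `openssl` on PATH):

```shell
# Generate a disposable CA cert, then install it the way minikube does above:
# symlink it under its OpenSSL subject-hash so the trust store can find it.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.pem -days 1 2>/dev/null
hash=$(openssl x509 -hash -noout -in /tmp/demo-ca.pem)
ln -fs /tmp/demo-ca.pem "/tmp/${hash}.0"
readlink "/tmp/${hash}.0"
```

This is why the log pairs each `ln -fs` with a preceding `openssl x509 -hash -noout` run: the hash output determines the link name.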
	I0207 21:53:04.100360   12092 kubeadm.go:390] StartCluster: {Name:newest-cni-20220207214858-8704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4-rc.0 ClusterName:newest-cni-20220207214858-8704 Namespace:default APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.4-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComp
onents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 21:53:04.108510   12092 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0207 21:53:04.197302   12092 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0207 21:53:04.228740   12092 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0207 21:53:04.228829   12092 kubeadm.go:600] restartCluster start
	I0207 21:53:04.238602   12092 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0207 21:53:04.266891   12092 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:04.273383   12092 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704
	I0207 21:53:05.778460   11580 system_pods.go:86] 8 kube-system pods found
	I0207 21:53:05.778514   11580 system_pods.go:89] "coredns-5644d7b6d9-5cdl2" [01bc9b5f-cc96-457b-8c41-0a6b2ae0a322] Running
	I0207 21:53:05.778581   11580 system_pods.go:89] "etcd-old-k8s-version-20220207213422-8704" [9843ed1a-4d7b-44a3-89b6-1fe62845b7e8] Running
	I0207 21:53:05.778625   11580 system_pods.go:89] "kube-apiserver-old-k8s-version-20220207213422-8704" [3bbc6549-ecd2-4235-9ca4-c3f9ab75d868] Running
	I0207 21:53:05.778662   11580 system_pods.go:89] "kube-controller-manager-old-k8s-version-20220207213422-8704" [95e98028-a3f7-4bea-ae02-72175664e79e] Running
	I0207 21:53:05.778710   11580 system_pods.go:89] "kube-proxy-9kjsq" [01a609fb-de7f-44e4-842c-d8809e8d07b6] Running
	I0207 21:53:05.778710   11580 system_pods.go:89] "kube-scheduler-old-k8s-version-20220207213422-8704" [5973b363-469a-49c1-95fe-942330710096] Running
	I0207 21:53:05.778770   11580 system_pods.go:89] "metrics-server-5b7b789f-nkphv" [86c8acbf-55aa-4568-a55d-61add7a8d512] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0207 21:53:05.778770   11580 system_pods.go:89] "storage-provisioner" [9f745ff0-c77e-4ca6-b41f-2ea70e2c3047] Running
	I0207 21:53:05.778770   11580 system_pods.go:126] duration metric: took 56.7461108s to wait for k8s-apps to be running ...
	I0207 21:53:05.778928   11580 system_svc.go:44] waiting for kubelet service to be running ....
	I0207 21:53:05.789741   11580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0207 21:53:05.826213   11580 system_svc.go:56] duration metric: took 46.7814ms WaitForService to wait for kubelet.
	I0207 21:53:05.826737   11580 kubeadm.go:547] duration metric: took 1m8.447686s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0207 21:53:05.826737   11580 node_conditions.go:102] verifying NodePressure condition ...
	I0207 21:53:05.837378   11580 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0207 21:53:05.837378   11580 node_conditions.go:123] node cpu capacity is 16
	I0207 21:53:05.837378   11580 node_conditions.go:105] duration metric: took 10.6411ms to run NodePressure ...
	I0207 21:53:05.837378   11580 start.go:213] waiting for startup goroutines ...
	I0207 21:53:06.073970   11580 start.go:496] kubectl: 1.18.2, cluster: 1.16.0 (minor skew: 2)
	I0207 21:53:06.076808   11580 out.go:176] 
	W0207 21:53:06.077353   11580 out.go:241] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.16.0.
	I0207 21:53:06.080693   11580 out.go:176]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0207 21:53:06.084575   11580 out.go:176] * Done! kubectl is now configured to use "old-k8s-version-20220207213422-8704" cluster and "default" namespace by default
	I0207 21:53:05.529959   12092 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704: (1.2565698s)
	I0207 21:53:05.531587   12092 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220207214858-8704" does not appear in C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0207 21:53:05.532012   12092 kubeconfig.go:127] "newest-cni-20220207214858-8704" context is missing from C:\Users\jenkins.minikube3\minikube-integration\kubeconfig - will repair!
	I0207 21:53:05.533000   12092 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 21:53:05.563892   12092 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0207 21:53:05.595881   12092 api_server.go:165] Checking apiserver status ...
	I0207 21:53:05.605843   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0207 21:53:05.647854   12092 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:05.847966   12092 api_server.go:165] Checking apiserver status ...
	I0207 21:53:05.859277   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0207 21:53:05.895772   12092 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:06.048096   12092 api_server.go:165] Checking apiserver status ...
	I0207 21:53:06.057209   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0207 21:53:06.103801   12092 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:06.248504   12092 api_server.go:165] Checking apiserver status ...
	I0207 21:53:06.257096   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0207 21:53:06.312848   12092 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:06.448941   12092 api_server.go:165] Checking apiserver status ...
	I0207 21:53:06.457634   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0207 21:53:06.501542   12092 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:06.648189   12092 api_server.go:165] Checking apiserver status ...
	I0207 21:53:06.659268   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0207 21:53:06.700131   12092 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:06.848083   12092 api_server.go:165] Checking apiserver status ...
	I0207 21:53:06.858569   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0207 21:53:06.897798   12092 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:07.048754   12092 api_server.go:165] Checking apiserver status ...
	I0207 21:53:07.054954   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0207 21:53:07.095967   12092 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:07.248316   12092 api_server.go:165] Checking apiserver status ...
	I0207 21:53:07.255277   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0207 21:53:07.298561   12092 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:07.448170   12092 api_server.go:165] Checking apiserver status ...
	I0207 21:53:07.458503   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0207 21:53:07.500346   12092 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:07.648728   12092 api_server.go:165] Checking apiserver status ...
	I0207 21:53:07.657589   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0207 21:53:07.695400   12092 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:07.848158   12092 api_server.go:165] Checking apiserver status ...
	I0207 21:53:07.857017   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0207 21:53:07.906826   12092 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:08.048344   12092 api_server.go:165] Checking apiserver status ...
	I0207 21:53:08.058531   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0207 21:53:08.097942   12092 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:08.249948   12092 api_server.go:165] Checking apiserver status ...
	I0207 21:53:08.261285   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0207 21:53:08.308426   12092 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:08.449622   12092 api_server.go:165] Checking apiserver status ...
	I0207 21:53:08.458790   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0207 21:53:08.498465   12092 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:08.649676   12092 api_server.go:165] Checking apiserver status ...
	I0207 21:53:08.657676   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0207 21:53:08.694952   12092 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:08.694952   12092 api_server.go:165] Checking apiserver status ...
	I0207 21:53:08.703857   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0207 21:53:08.747727   12092 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:08.748265   12092 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0207 21:53:08.748265   12092 kubeadm.go:1066] stopping kube-system containers ...
	I0207 21:53:08.755406   12092 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0207 21:53:08.865555   12092 docker.go:438] Stopping containers: [b6a2dc2c489b 3ba6dbc5286c 2f5a838b3186 070a03c8bd2b 6f00f94a35a9 f2bf4b900019 4ac0d9369b0d 88824d43ae8a 5b4d76674230 e20e4197b8ba f671581d2d29 c692be2dbcb4 c1cc2af84422 e4eaa53a6243 6a752c653974 49b54541465a 656c3ef9ebc4 88893c4a87f5 6b8821375d93 aa6156ae792e]
	I0207 21:53:08.871802   12092 ssh_runner.go:195] Run: docker stop b6a2dc2c489b 3ba6dbc5286c 2f5a838b3186 070a03c8bd2b 6f00f94a35a9 f2bf4b900019 4ac0d9369b0d 88824d43ae8a 5b4d76674230 e20e4197b8ba f671581d2d29 c692be2dbcb4 c1cc2af84422 e4eaa53a6243 6a752c653974 49b54541465a 656c3ef9ebc4 88893c4a87f5 6b8821375d93 aa6156ae792e
	I0207 21:53:08.973505   12092 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0207 21:53:09.007507   12092 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0207 21:53:09.070754   12092 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5639 Feb  7 21:51 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Feb  7 21:51 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 Feb  7 21:51 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Feb  7 21:51 /etc/kubernetes/scheduler.conf
	
	I0207 21:53:09.080114   12092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0207 21:53:09.111535   12092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0207 21:53:09.144106   12092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0207 21:53:09.176181   12092 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:09.186687   12092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0207 21:53:09.222307   12092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0207 21:53:09.248885   12092 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0207 21:53:09.255874   12092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
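The grep/rm pairs above are minikube's reconfigure check: each kubeconfig under /etc/kubernetes is grepped for the expected endpoint `https://control-plane.minikube.internal:8443`, and any file where grep exits with status 1 is removed so kubeadm can regenerate it. The same decision, sketched locally (the function name is ours):

```python
def needs_regeneration(path: str,
                       endpoint: str = "https://control-plane.minikube.internal:8443") -> bool:
    """True when `path` does not mention the expected control-plane endpoint
    (grep exiting with status 1 in the log), i.e. the file should be removed
    and rewritten by `kubeadm init phase kubeconfig`."""
    try:
        with open(path) as f:
            return endpoint not in f.read()
    except FileNotFoundError:
        return True  # nothing to keep; kubeadm will create it fresh
```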
	I0207 21:53:09.283358   12092 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0207 21:53:09.312803   12092 kubeadm.go:677] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0207 21:53:09.312803   12092 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0207 21:53:09.466045   12092 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0207 21:53:10.566510   12092 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.0996125s)
	I0207 21:53:10.566510   12092 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0207 21:53:10.873493   12092 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0207 21:53:11.062971   12092 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
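Rather than a full `kubeadm init`, the reconfigure path above re-runs individual init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd) against the staged config. A sketch that reassembles the command strings seen in the log (binary and config paths are taken verbatim from the log; the helper is ours):

```python
BIN_DIR = "/var/lib/minikube/binaries/v1.23.4-rc.0"  # from the log
CONFIG = "/var/tmp/minikube/kubeadm.yaml"            # from the log
PHASES = ["certs all", "kubeconfig all", "kubelet-start",
          "control-plane all", "etcd local"]         # order run above

def phase_cmd(phase: str) -> str:
    """Build the shell command minikube runs for one kubeadm init phase."""
    return (f'sudo env PATH="{BIN_DIR}:$PATH" '
            f'kubeadm init phase {phase} --config {CONFIG}')
```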
	I0207 21:53:11.335287   12092 api_server.go:51] waiting for apiserver process to appear ...
	I0207 21:53:11.346444   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0207 21:53:11.900445   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0207 21:53:12.401838   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0207 21:53:12.905595   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0207 21:53:13.398529   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0207 21:53:13.899974   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0207 21:53:14.401600   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0207 21:53:14.902699   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0207 21:53:15.401095   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0207 21:53:15.901362   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0207 21:53:16.400407   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0207 21:53:16.902910   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0207 21:53:17.401281   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0207 21:53:17.900681   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0207 21:53:18.134301   12092 api_server.go:71] duration metric: took 6.7989789s to wait for apiserver process to appear ...
	I0207 21:53:18.134301   12092 api_server.go:87] waiting for apiserver healthz status ...
	I0207 21:53:18.134301   12092 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50327/healthz ...
	I0207 21:53:18.141046   12092 api_server.go:256] stopped: https://127.0.0.1:50327/healthz: Get "https://127.0.0.1:50327/healthz": EOF
	I0207 21:53:18.641184   12092 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50327/healthz ...
	I0207 21:53:18.647019   12092 api_server.go:256] stopped: https://127.0.0.1:50327/healthz: Get "https://127.0.0.1:50327/healthz": EOF
	I0207 21:53:19.141857   12092 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50327/healthz ...
	I0207 21:53:24.044519   12092 api_server.go:266] https://127.0.0.1:50327/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0207 21:53:24.044519   12092 api_server.go:102] status: https://127.0.0.1:50327/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0207 21:53:24.142580   12092 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50327/healthz ...
	I0207 21:53:24.334424   12092 api_server.go:266] https://127.0.0.1:50327/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0207 21:53:24.334659   12092 api_server.go:102] status: https://127.0.0.1:50327/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0207 21:53:24.642472   12092 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50327/healthz ...
	I0207 21:53:24.730976   12092 api_server.go:266] https://127.0.0.1:50327/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0207 21:53:24.731124   12092 api_server.go:102] status: https://127.0.0.1:50327/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0207 21:53:25.141713   12092 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50327/healthz ...
	I0207 21:53:25.336133   12092 api_server.go:266] https://127.0.0.1:50327/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0207 21:53:25.336133   12092 api_server.go:102] status: https://127.0.0.1:50327/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0207 21:53:25.642280   12092 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50327/healthz ...
	I0207 21:53:25.741267   12092 api_server.go:266] https://127.0.0.1:50327/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0207 21:53:25.741342   12092 api_server.go:102] status: https://127.0.0.1:50327/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0207 21:53:26.142675   12092 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50327/healthz ...
	I0207 21:53:26.183809   12092 api_server.go:266] https://127.0.0.1:50327/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0207 21:53:26.183809   12092 api_server.go:102] status: https://127.0.0.1:50327/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0207 21:53:26.641588   12092 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50327/healthz ...
	I0207 21:53:26.750956   12092 api_server.go:266] https://127.0.0.1:50327/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0207 21:53:26.751034   12092 api_server.go:102] status: https://127.0.0.1:50327/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0207 21:53:27.141495   12092 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50327/healthz ...
	I0207 21:53:27.239257   12092 api_server.go:266] https://127.0.0.1:50327/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0207 21:53:27.239257   12092 api_server.go:102] status: https://127.0.0.1:50327/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0207 21:53:27.642810   12092 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50327/healthz ...
	I0207 21:53:27.830830   12092 api_server.go:266] https://127.0.0.1:50327/healthz returned 200:
	ok
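The poll above (EOF, then 403 for the anonymous user, then 500 with failing poststarthooks, then 200) is the normal startup sequence for a restarted apiserver; minikube retries /healthz roughly every 500ms until the body is a literal `ok`. A self-contained sketch of such a poll loop, exercised against a stand-in local server rather than a real apiserver:

```python
import time
import urllib.error
import urllib.request

def wait_healthz(url: str, timeout: float = 30.0, interval: float = 0.5) -> bool:
    """Poll `url` until it returns HTTP 200 with body 'ok', treating
    connection errors and non-200 statuses (the 403/500 phases during
    startup) as 'not ready yet, retry'."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200 and resp.read().strip() == b"ok":
                    return True
        except (urllib.error.URLError, OSError):
            pass  # EOF / refused / 403 / 500: apiserver still coming up
        time.sleep(interval)
    return False
```

Note that `urlopen` raises `HTTPError` (a `URLError` subclass) for the 403 and 500 responses, so all three failure modes in the log funnel into the same retry branch.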
	I0207 21:53:27.932194   12092 api_server.go:140] control plane version: v1.23.4-rc.0
	I0207 21:53:27.932194   12092 api_server.go:130] duration metric: took 9.797843s to wait for apiserver health ...
	I0207 21:53:27.932358   12092 cni.go:93] Creating CNI manager for ""
	I0207 21:53:27.932358   12092 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0207 21:53:27.932457   12092 system_pods.go:43] waiting for kube-system pods to appear ...
	I0207 21:53:28.066248   12092 system_pods.go:59] 8 kube-system pods found
	I0207 21:53:28.066321   12092 system_pods.go:61] "coredns-64897985d-dx8qt" [1ae104b4-0012-42d0-8649-cad69e3edb18] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0207 21:53:28.066321   12092 system_pods.go:61] "etcd-newest-cni-20220207214858-8704" [d3f0b527-3e63-4a70-bfa7-69de8f22b952] Running
	I0207 21:53:28.066321   12092 system_pods.go:61] "kube-apiserver-newest-cni-20220207214858-8704" [46181458-88de-4f57-857c-e0fb3238a62b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0207 21:53:28.066321   12092 system_pods.go:61] "kube-controller-manager-newest-cni-20220207214858-8704" [2624fa05-b365-4660-ba23-cd6be7b7cc3e] Running
	I0207 21:53:28.066321   12092 system_pods.go:61] "kube-proxy-fhm4g" [176adc08-9f04-43f1-91a1-f4ee9bd5568e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0207 21:53:28.066410   12092 system_pods.go:61] "kube-scheduler-newest-cni-20220207214858-8704" [f3a65188-d868-4d32-9375-d4bb240b9955] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0207 21:53:28.066410   12092 system_pods.go:61] "metrics-server-7f49dcbd7-hmd9h" [5f767c53-9428-45f1-b81f-05359a08115b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0207 21:53:28.066410   12092 system_pods.go:61] "storage-provisioner" [c4ec0fef-4e5a-4d6b-8ab7-e35dd9652808] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0207 21:53:28.066410   12092 system_pods.go:74] duration metric: took 133.9524ms to wait for pod list to return data ...
	I0207 21:53:28.066410   12092 node_conditions.go:102] verifying NodePressure condition ...
	I0207 21:53:28.150539   12092 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0207 21:53:28.150605   12092 node_conditions.go:123] node cpu capacity is 16
	I0207 21:53:28.150645   12092 node_conditions.go:105] duration metric: took 84.2351ms to run NodePressure ...
	I0207 21:53:28.150705   12092 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0207 21:53:30.731007   12092 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.5801938s)
	I0207 21:53:30.731007   12092 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0207 21:53:30.931055   12092 ops.go:34] apiserver oom_adj: -16
	I0207 21:53:30.931209   12092 kubeadm.go:604] restartCluster took 26.7022436s
	I0207 21:53:30.931330   12092 kubeadm.go:392] StartCluster complete in 26.830712s
	I0207 21:53:30.931330   12092 settings.go:142] acquiring lock: {Name:mke99fb8c09012609ce6804e7dfd4d68f5541df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 21:53:30.931715   12092 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0207 21:53:30.933904   12092 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 21:53:31.035812   12092 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220207214858-8704" rescaled to 1
	I0207 21:53:31.036030   12092 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.4-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0207 21:53:31.036030   12092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0207 21:53:31.036030   12092 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0207 21:53:31.038583   12092 out.go:176] * Verifying Kubernetes components...
	I0207 21:53:31.036345   12092 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220207214858-8704"
	I0207 21:53:31.036345   12092 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220207214858-8704"
	I0207 21:53:31.039998   12092 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220207214858-8704"
	I0207 21:53:31.036345   12092 addons.go:65] Setting dashboard=true in profile "newest-cni-20220207214858-8704"
	W0207 21:53:31.039998   12092 addons.go:165] addon metrics-server should already be in state true
	I0207 21:53:31.036345   12092 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220207214858-8704"
	I0207 21:53:31.040294   12092 host.go:66] Checking if "newest-cni-20220207214858-8704" exists ...
	I0207 21:53:31.037010   12092 config.go:176] Loaded profile config "newest-cni-20220207214858-8704": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4-rc.0
	I0207 21:53:31.040478   12092 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220207214858-8704"
	I0207 21:53:31.039901   12092 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220207214858-8704"
	I0207 21:53:31.040294   12092 addons.go:153] Setting addon dashboard=true in "newest-cni-20220207214858-8704"
	W0207 21:53:31.040635   12092 addons.go:165] addon storage-provisioner should already be in state true
	W0207 21:53:31.040635   12092 addons.go:165] addon dashboard should already be in state true
	I0207 21:53:31.040635   12092 host.go:66] Checking if "newest-cni-20220207214858-8704" exists ...
	I0207 21:53:31.040635   12092 host.go:66] Checking if "newest-cni-20220207214858-8704" exists ...
	I0207 21:53:31.059259   12092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0207 21:53:31.066242   12092 cli_runner.go:133] Run: docker container inspect newest-cni-20220207214858-8704 --format={{.State.Status}}
	I0207 21:53:31.066242   12092 cli_runner.go:133] Run: docker container inspect newest-cni-20220207214858-8704 --format={{.State.Status}}
	I0207 21:53:31.067749   12092 cli_runner.go:133] Run: docker container inspect newest-cni-20220207214858-8704 --format={{.State.Status}}
	I0207 21:53:31.068263   12092 cli_runner.go:133] Run: docker container inspect newest-cni-20220207214858-8704 --format={{.State.Status}}
	I0207 21:53:31.578427   12092 start.go:757] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0207 21:53:31.584423   12092 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704
	I0207 21:53:32.756335   12092 cli_runner.go:186] Completed: docker container inspect newest-cni-20220207214858-8704 --format={{.State.Status}}: (1.6885349s)
	I0207 21:53:32.759328   12092 cli_runner.go:186] Completed: docker container inspect newest-cni-20220207214858-8704 --format={{.State.Status}}: (1.693077s)
	I0207 21:53:32.759328   12092 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I0207 21:53:32.761327   12092 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I0207 21:53:32.762326   12092 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0207 21:53:32.762326   12092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0207 21:53:32.770342   12092 cli_runner.go:186] Completed: docker container inspect newest-cni-20220207214858-8704 --format={{.State.Status}}: (1.7020706s)
	I0207 21:53:32.772326   12092 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0207 21:53:32.772326   12092 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704
	I0207 21:53:32.773326   12092 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0207 21:53:32.773326   12092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0207 21:53:32.780329   12092 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704
	I0207 21:53:32.788343   12092 cli_runner.go:186] Completed: docker container inspect newest-cni-20220207214858-8704 --format={{.State.Status}}: (1.7220914s)
	I0207 21:53:32.791325   12092 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0207 21:53:32.791325   12092 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0207 21:53:32.791325   12092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0207 21:53:32.803384   12092 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704
	I0207 21:53:32.805325   12092 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220207214858-8704"
	W0207 21:53:32.805325   12092 addons.go:165] addon default-storageclass should already be in state true
	I0207 21:53:32.805325   12092 host.go:66] Checking if "newest-cni-20220207214858-8704" exists ...
	I0207 21:53:32.818326   12092 cli_runner.go:133] Run: docker container inspect newest-cni-20220207214858-8704 --format={{.State.Status}}
	I0207 21:53:33.366148   12092 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704: (1.7815798s)
	I0207 21:53:33.366148   12092 api_server.go:51] waiting for apiserver process to appear ...
	I0207 21:53:33.380571   12092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0207 21:53:33.458680   12092 api_server.go:71] duration metric: took 2.4226376s to wait for apiserver process to appear ...
	I0207 21:53:33.459679   12092 api_server.go:87] waiting for apiserver healthz status ...
	I0207 21:53:33.459679   12092 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50327/healthz ...
	I0207 21:53:33.543694   12092 api_server.go:266] https://127.0.0.1:50327/healthz returned 200:
	ok
	I0207 21:53:33.549688   12092 api_server.go:140] control plane version: v1.23.4-rc.0
	I0207 21:53:33.549688   12092 api_server.go:130] duration metric: took 90.0087ms to wait for apiserver health ...
	I0207 21:53:33.549688   12092 system_pods.go:43] waiting for kube-system pods to appear ...
	I0207 21:53:33.640653   12092 system_pods.go:59] 8 kube-system pods found
	I0207 21:53:33.640653   12092 system_pods.go:61] "coredns-64897985d-dx8qt" [1ae104b4-0012-42d0-8649-cad69e3edb18] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0207 21:53:33.640653   12092 system_pods.go:61] "etcd-newest-cni-20220207214858-8704" [d3f0b527-3e63-4a70-bfa7-69de8f22b952] Running
	I0207 21:53:33.640653   12092 system_pods.go:61] "kube-apiserver-newest-cni-20220207214858-8704" [46181458-88de-4f57-857c-e0fb3238a62b] Running
	I0207 21:53:33.640653   12092 system_pods.go:61] "kube-controller-manager-newest-cni-20220207214858-8704" [2624fa05-b365-4660-ba23-cd6be7b7cc3e] Running
	I0207 21:53:33.640653   12092 system_pods.go:61] "kube-proxy-fhm4g" [176adc08-9f04-43f1-91a1-f4ee9bd5568e] Running
	I0207 21:53:33.640653   12092 system_pods.go:61] "kube-scheduler-newest-cni-20220207214858-8704" [f3a65188-d868-4d32-9375-d4bb240b9955] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0207 21:53:33.640653   12092 system_pods.go:61] "metrics-server-7f49dcbd7-hmd9h" [5f767c53-9428-45f1-b81f-05359a08115b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0207 21:53:33.640653   12092 system_pods.go:61] "storage-provisioner" [c4ec0fef-4e5a-4d6b-8ab7-e35dd9652808] Running
	I0207 21:53:33.640653   12092 system_pods.go:74] duration metric: took 90.9649ms to wait for pod list to return data ...
	I0207 21:53:33.640653   12092 default_sa.go:34] waiting for default service account to be created ...
	I0207 21:53:33.649659   12092 default_sa.go:45] found service account: "default"
	I0207 21:53:33.649659   12092 default_sa.go:55] duration metric: took 9.0059ms for default service account to be created ...
	I0207 21:53:33.649659   12092 kubeadm.go:547] duration metric: took 2.613616s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0207 21:53:33.649659   12092 node_conditions.go:102] verifying NodePressure condition ...
	I0207 21:53:33.660459   12092 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0207 21:53:33.660510   12092 node_conditions.go:123] node cpu capacity is 16
	I0207 21:53:33.660657   12092 node_conditions.go:105] duration metric: took 10.9974ms to run NodePressure ...
	I0207 21:53:33.660714   12092 start.go:213] waiting for startup goroutines ...
	I0207 21:53:34.397704   12092 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704: (1.6173673s)
	I0207 21:53:34.397704   12092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50323 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\newest-cni-20220207214858-8704\id_rsa Username:docker}
	I0207 21:53:34.405700   12092 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704: (1.6333659s)
	I0207 21:53:34.405700   12092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50323 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\newest-cni-20220207214858-8704\id_rsa Username:docker}
	I0207 21:53:34.454307   12092 cli_runner.go:186] Completed: docker container inspect newest-cni-20220207214858-8704 --format={{.State.Status}}: (1.6359727s)
	I0207 21:53:34.454307   12092 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0207 21:53:34.454307   12092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0207 21:53:34.464301   12092 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704
	I0207 21:53:34.473346   12092 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704: (1.6699527s)
	I0207 21:53:34.473346   12092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50323 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\newest-cni-20220207214858-8704\id_rsa Username:docker}
	I0207 21:53:34.729665   12092 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0207 21:53:34.729765   12092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0207 21:53:34.750147   12092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0207 21:53:34.773491   12092 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0207 21:53:34.773624   12092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0207 21:53:34.840296   12092 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0207 21:53:34.840296   12092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0207 21:53:35.048894   12092 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0207 21:53:35.049466   12092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0207 21:53:35.138401   12092 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0207 21:53:35.138994   12092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0207 21:53:35.253124   12092 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0207 21:53:35.253124   12092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0207 21:53:35.260903   12092 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0207 21:53:35.260903   12092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0207 21:53:35.368241   12092 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0207 21:53:35.368241   12092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0207 21:53:35.446716   12092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0207 21:53:35.486365   12092 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0207 21:53:35.486455   12092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0207 21:53:35.646044   12092 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0207 21:53:35.646044   12092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0207 21:53:35.853210   12092 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0207 21:53:35.853300   12092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0207 21:53:35.947727   12092 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220207214858-8704: (1.4834179s)
	I0207 21:53:35.947727   12092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50323 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\newest-cni-20220207214858-8704\id_rsa Username:docker}
	I0207 21:53:36.030033   12092 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0207 21:53:36.030033   12092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0207 21:53:36.254045   12092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0207 21:53:36.549494   12092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0207 21:53:39.832923   12092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.0826985s)
	I0207 21:53:39.933783   12092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.4869993s)
	I0207 21:53:39.933783   12092 addons.go:386] Verifying addon metrics-server=true in "newest-cni-20220207214858-8704"
	I0207 21:53:40.832790   12092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.5786592s)
	I0207 21:53:40.832790   12092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.2832737s)
	I0207 21:53:40.836562   12092 out.go:176] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0207 21:53:40.836700   12092 addons.go:417] enableAddons completed in 9.8006192s
	I0207 21:53:41.108838   12092 start.go:496] kubectl: 1.18.2, cluster: 1.23.4-rc.0 (minor skew: 5)
	I0207 21:53:41.113007   12092 out.go:176] 
	W0207 21:53:41.113403   12092 out.go:241] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.23.4-rc.0.
	I0207 21:53:41.116657   12092 out.go:176]   - Want kubectl v1.23.4-rc.0? Try 'minikube kubectl -- get pods -A'
	I0207 21:53:41.118871   12092 out.go:176] * Done! kubectl is now configured to use "newest-cni-20220207214858-8704" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-02-07 21:45:36 UTC, end at Mon 2022-02-07 21:54:25 UTC. --
	Feb 07 21:51:16 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:51:16.698548200Z" level=info msg="ignoring event" container=f5830bb53007aa7be5c4765c752c19e5f89f4f78e4af1af2e9918de8ade404da module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 21:51:17 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:51:17.062198200Z" level=info msg="ignoring event" container=dd147702e86e6f6f59b5d8e1883cb1f4fd0a89223f3c4cf779d2ca3c82e09ad1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 21:51:17 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:51:17.375602800Z" level=info msg="ignoring event" container=587d7811e2e96af58cdb29939816249575a7ccf94d5a2a6a12f9f411c5507a23 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 21:51:59 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:51:59.277849300Z" level=error msg="stream copy error: reading from a closed fifo"
	Feb 07 21:51:59 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:51:59.278715800Z" level=error msg="stream copy error: reading from a closed fifo"
	Feb 07 21:51:59 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:51:59.890502200Z" level=error msg="98ec4b0adb654678a5b43fe0fa245576846bb20cc2fb85577be4751d8b759595 cleanup: failed to delete container from containerd: no such container"
	Feb 07 21:52:10 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:52:10.002669900Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Feb 07 21:52:10 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:52:10.002844200Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Feb 07 21:52:10 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:52:10.014957800Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Feb 07 21:52:11 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:52:11.465357400Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Feb 07 21:52:11 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:52:11.608170500Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Feb 07 21:52:28 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:52:28.186404800Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Feb 07 21:52:28 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:52:28.186530500Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Feb 07 21:52:28 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:52:28.215719200Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Feb 07 21:52:30 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:52:30.180779200Z" level=info msg="ignoring event" container=e4d602b4d59293ac01c1a84338f807eceafc9b10590abe9375acd246b2409fa3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 21:52:31 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:52:31.614264400Z" level=info msg="ignoring event" container=c550ec152abeeff30b4d9b806cb5db52c1cebed690f5288c7dde5a28e6743c16 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 21:52:50 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:52:50.187747600Z" level=info msg="ignoring event" container=cc91b93cba5368742c32b6c07ded587705800165fc825d5f1b6a719767e71315 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 21:52:57 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:52:57.709378100Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Feb 07 21:52:57 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:52:57.709557100Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Feb 07 21:52:57 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:52:57.721996900Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Feb 07 21:53:25 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:53:25.279377500Z" level=info msg="ignoring event" container=7247de69d44bed3976d35ba6dc8ba10942fed649986a45445397d342a938b06c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 21:53:40 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:53:40.709316200Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Feb 07 21:53:40 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:53:40.709518000Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Feb 07 21:53:40 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:53:40.723078400Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Feb 07 21:54:14 old-k8s-version-20220207213422-8704 dockerd[213]: time="2022-02-07T21:54:14.301721700Z" level=info msg="ignoring event" container=0bbff3c4a0c626c9128f40bcbeeac2a50f91267197cf0d0f81ca15ab8dab2ef2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID
	0bbff3c4a0c62       a90209bb39e3d       12 seconds ago      Exited              dashboard-metrics-scraper   4                   e895657da88ca
	60f4881418fd1       e1482a24335a6       2 minutes ago       Running             kubernetes-dashboard        0                   c72cc9964cd1f
	a8ed79fbdc062       6e38f40d628db       2 minutes ago       Running             storage-provisioner         0                   c73f8f93ca8f4
	a85fc6a2084b6       bf261d1579144       2 minutes ago       Running             coredns                     0                   a0e2dff2130f1
	ec95e16fcc809       c21b0c7400f98       2 minutes ago       Running             kube-proxy                  0                   ef7860df7145a
	afe896f092f1d       301ddc62b80b1       2 minutes ago       Running             kube-scheduler              0                   0cb1e1b733c50
	8ae07c05261a3       06a629a7e51cd       2 minutes ago       Running             kube-controller-manager     0                   bcf6997955a74
	cc8e4f4dc07cd       b305571ca60a5       2 minutes ago       Running             kube-apiserver              0                   7fbdc6ce27662
	380179822a91f       b2756210eeabf       2 minutes ago       Running             etcd                        0                   2c1104f4ac7fa
	
	* 
	* ==> coredns [a85fc6a2084b] <==
	* .:53
	2022-02-07T21:52:01.908Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2022-02-07T21:52:01.982Z [INFO] CoreDNS-1.6.2
	2022-02-07T21:52:01.982Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2022-02-07T21:52:27.938Z [INFO] plugin/reload: Running configuration MD5 = 034a4984a79adc08e57427d1bc08b68f
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20220207213422-8704
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20220207213422-8704
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=68b41900649d825bc98a620f335c8941b16741bb
	                    minikube.k8s.io/name=old-k8s-version-20220207213422-8704
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_02_07T21_51_41_0700
	                    minikube.k8s.io/version=v1.25.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Feb 2022 21:51:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Feb 2022 21:53:59 +0000   Mon, 07 Feb 2022 21:51:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Feb 2022 21:53:59 +0000   Mon, 07 Feb 2022 21:51:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Feb 2022 21:53:59 +0000   Mon, 07 Feb 2022 21:51:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Feb 2022 21:53:59 +0000   Mon, 07 Feb 2022 21:51:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-20220207213422-8704
	Capacity:
	 cpu:                16
	 ephemeral-storage:  263174212Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             52646744Ki
	 pods:               110
	Allocatable:
	 cpu:                16
	 ephemeral-storage:  263174212Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             52646744Ki
	 pods:               110
	System Info:
	 Machine ID:                 f0d9fc3b84d34ab4ba684459888f0938
	 System UUID:                f0d9fc3b84d34ab4ba684459888f0938
	 Boot ID:                    63de5e8a-b025-4a3e-80b6-1ee5f15fec4d
	 Kernel Version:             5.10.16.3-microsoft-standard-WSL2
	 OS Image:                   Ubuntu 20.04.2 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  docker://20.10.12
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (10 in total)
	  Namespace                  Name                                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-5cdl2                                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m31s
	  kube-system                etcd-old-k8s-version-20220207213422-8704                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                kube-apiserver-old-k8s-version-20220207213422-8704             250m (1%)     0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                kube-controller-manager-old-k8s-version-20220207213422-8704    200m (1%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                kube-proxy-9kjsq                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                kube-scheduler-old-k8s-version-20220207213422-8704             100m (0%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                metrics-server-5b7b789f-nkphv                                  100m (0%)     0 (0%)      300Mi (0%)       0 (0%)         2m21s
	  kube-system                storage-provisioner                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kubernetes-dashboard       dashboard-metrics-scraper-6b84985989-jz2sz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kubernetes-dashboard       kubernetes-dashboard-766959b846-27bjs                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (4%)   0 (0%)
	  memory             370Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From                                             Message
	  ----    ------                   ----                 ----                                             -------
	  Normal  NodeHasSufficientMemory  3m2s (x8 over 3m3s)  kubelet, old-k8s-version-20220207213422-8704     Node old-k8s-version-20220207213422-8704 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m2s (x8 over 3m3s)  kubelet, old-k8s-version-20220207213422-8704     Node old-k8s-version-20220207213422-8704 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m2s (x7 over 3m3s)  kubelet, old-k8s-version-20220207213422-8704     Node old-k8s-version-20220207213422-8704 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m26s                kube-proxy, old-k8s-version-20220207213422-8704  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000199] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000382] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Feb 7 21:17] WSL2: Performing memory compaction.
	[Feb 7 21:18] WSL2: Performing memory compaction.
	[Feb 7 21:20] WSL2: Performing memory compaction.
	[Feb 7 21:21] WSL2: Performing memory compaction.
	[Feb 7 21:24] WSL2: Performing memory compaction.
	[Feb 7 21:25] WSL2: Performing memory compaction.
	[Feb 7 21:26] WSL2: Performing memory compaction.
	[Feb 7 21:28] WSL2: Performing memory compaction.
	[Feb 7 21:29] WSL2: Performing memory compaction.
	[Feb 7 21:30] WSL2: Performing memory compaction.
	[Feb 7 21:31] WSL2: Performing memory compaction.
	[Feb 7 21:32] WSL2: Performing memory compaction.
	[Feb 7 21:34] WSL2: Performing memory compaction.
	[Feb 7 21:35] WSL2: Performing memory compaction.
	[Feb 7 21:38] WSL2: Performing memory compaction.
	[Feb 7 21:45] WSL2: Performing memory compaction.
	[Feb 7 21:48] WSL2: Performing memory compaction.
	[Feb 7 21:49] WSL2: Performing memory compaction.
	[Feb 7 21:50] WSL2: Performing memory compaction.
	[Feb 7 21:52] WSL2: Performing memory compaction.
	[Feb 7 21:54] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [380179822a91] <==
	* 2022-02-07 21:51:36.277959 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/old-k8s-version-20220207213422-8704\" " with result "range_response_count:0 size:4" took too long (190.2189ms) to execute
	2022-02-07 21:51:36.391547 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-20220207213422-8704\" " with result "range_response_count:1 size:3010" took too long (108.2902ms) to execute
	2022-02-07 21:51:36.391842 W | etcdserver: read-only range request "key:\"/registry/ranges/serviceips\" " with result "range_response_count:1 size:114" took too long (108.1925ms) to execute
	2022-02-07 21:51:36.391880 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-scheduler-old-k8s-version-20220207213422-8704\" " with result "range_response_count:0 size:4" took too long (107.7159ms) to execute
	2022-02-07 21:51:36.391962 W | etcdserver: read-only range request "key:\"/registry/ranges/servicenodeports\" " with result "range_response_count:1 size:119" took too long (108ms) to execute
	2022-02-07 21:51:36.392077 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:0 size:4" took too long (107.81ms) to execute
	2022-02-07 21:51:36.392130 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:4" took too long (107.8673ms) to execute
	2022-02-07 21:51:55.979870 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/coredns\" " with result "range_response_count:1 size:1152" took too long (100.0735ms) to execute
	2022-02-07 21:52:03.092211 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:0 size:5" took too long (102.3648ms) to execute
	2022-02-07 21:52:03.990149 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-20220207213422-8704\" " with result "range_response_count:1 size:3387" took too long (100.8329ms) to execute
	2022-02-07 21:52:04.516220 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:178" took too long (127.6192ms) to execute
	2022-02-07 21:52:04.697498 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/metrics-server\" " with result "range_response_count:1 size:481" took too long (114.9509ms) to execute
	2022-02-07 21:52:04.825561 W | etcdserver: read-only range request "key:\"/registry/storageclasses/standard\" " with result "range_response_count:0 size:5" took too long (131.7702ms) to execute
	2022-02-07 21:52:05.021323 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/metrics-server:system:auth-delegator\" " with result "range_response_count:0 size:5" took too long (103.5511ms) to execute
	2022-02-07 21:52:05.484899 W | etcdserver: read-only range request "key:\"/registry/namespaces/kubernetes-dashboard\" " with result "range_response_count:1 size:547" took too long (101.3322ms) to execute
	2022-02-07 21:52:05.485172 W | etcdserver: read-only range request "key:\"/registry/limitranges/kubernetes-dashboard/\" range_end:\"/registry/limitranges/kubernetes-dashboard0\" " with result "range_response_count:0 size:5" took too long (100.3297ms) to execute
	2022-02-07 21:52:06.000966 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard\" " with result "range_response_count:0 size:5" took too long (115.8329ms) to execute
	2022-02-07 21:52:06.001324 W | etcdserver: read-only range request "key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-6b84985989.16d19f835ba86acc\" " with result "range_response_count:1 size:695" took too long (119.0593ms) to execute
	2022-02-07 21:52:06.086597 W | etcdserver: read-only range request "key:\"/registry/namespaces/kubernetes-dashboard\" " with result "range_response_count:1 size:547" took too long (103.2956ms) to execute
	2022-02-07 21:52:06.086870 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard\" " with result "range_response_count:0 size:5" took too long (101.341ms) to execute
	2022-02-07 21:52:06.478298 W | etcdserver: read-only range request "key:\"/registry/events/kubernetes-dashboard/kubernetes-dashboard-766959b846.16d19f8366d52a5c\" " with result "range_response_count:1 size:675" took too long (183.0937ms) to execute
	2022-02-07 21:52:07.187048 W | etcdserver: request "header:<ID:15638326214409946467 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/specs/kubernetes-dashboard/dashboard-metrics-scraper\" mod_revision:0 > success:<request_put:<key:\"/registry/services/specs/kubernetes-dashboard/dashboard-metrics-scraper\" value_size:807 >> failure:<>>" with result "size:16" took too long (106.3223ms) to execute
	2022-02-07 21:52:07.583357 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-20220207213422-8704\" " with result "range_response_count:1 size:3387" took too long (102.7764ms) to execute
	2022-02-07 21:52:15.278546 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:765" took too long (385.343ms) to execute
	2022-02-07 21:52:29.637723 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:172" took too long (334.2243ms) to execute
	
	* 
	* ==> kernel <==
	*  21:54:26 up  2:38,  0 users,  load average: 3.02, 4.95, 5.16
	Linux old-k8s-version-20220207213422-8704 5.10.16.3-microsoft-standard-WSL2 #1 SMP Fri Apr 2 22:23:49 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [cc8e4f4dc07c] <==
	* I0207 21:51:36.923846       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0207 21:51:36.932083       1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
	I0207 21:51:36.938438       1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
	I0207 21:51:36.938542       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0207 21:51:38.706388       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0207 21:51:38.985858       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0207 21:51:39.338988       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0207 21:51:39.340245       1 controller.go:606] quota admission added evaluator for: endpoints
	I0207 21:51:40.304132       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0207 21:51:41.017554       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0207 21:51:41.241808       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0207 21:51:42.904376       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0207 21:51:55.684436       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0207 21:51:55.991883       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I0207 21:51:56.080610       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0207 21:52:09.106686       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0207 21:52:09.108136       1 handler_proxy.go:99] no RequestInfo found in the context
	E0207 21:52:09.178663       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0207 21:52:09.178794       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0207 21:53:09.183002       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0207 21:53:09.185877       1 handler_proxy.go:99] no RequestInfo found in the context
	E0207 21:53:09.186544       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0207 21:53:09.186714       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [8ae07c05261a] <==
	* I0207 21:52:05.879595       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-5b7b789f", UID:"023fdc8f-9b00-401c-a8d3-f31136d70fad", APIVersion:"apps/v1", ResourceVersion:"391", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-5b7b789f-nkphv
	E0207 21:52:05.881799       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-6b84985989" failed with pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0207 21:52:05.881799       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-766959b846" failed with pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0207 21:52:05.881819       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-766959b846", UID:"75c5be17-10f1-4c66-aecc-7ea1aa804bbd", APIVersion:"apps/v1", ResourceVersion:"430", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0207 21:52:05.882478       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"e661e53b-2497-4d1f-9b73-b24d77164efc", APIVersion:"apps/v1", ResourceVersion:"423", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0207 21:52:05.982034       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-766959b846" failed with pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0207 21:52:05.982093       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-766959b846", UID:"75c5be17-10f1-4c66-aecc-7ea1aa804bbd", APIVersion:"apps/v1", ResourceVersion:"430", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0207 21:52:06.003292       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-6b84985989" failed with pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0207 21:52:06.003738       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"e661e53b-2497-4d1f-9b73-b24d77164efc", APIVersion:"apps/v1", ResourceVersion:"423", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0207 21:52:06.090415       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-6b84985989" failed with pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0207 21:52:06.090477       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-766959b846" failed with pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0207 21:52:06.090485       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"e661e53b-2497-4d1f-9b73-b24d77164efc", APIVersion:"apps/v1", ResourceVersion:"423", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0207 21:52:06.090527       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-766959b846", UID:"75c5be17-10f1-4c66-aecc-7ea1aa804bbd", APIVersion:"apps/v1", ResourceVersion:"430", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0207 21:52:06.180549       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"e661e53b-2497-4d1f-9b73-b24d77164efc", APIVersion:"apps/v1", ResourceVersion:"423", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0207 21:52:06.180656       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-6b84985989" failed with pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0207 21:52:07.195431       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-766959b846", UID:"75c5be17-10f1-4c66-aecc-7ea1aa804bbd", APIVersion:"apps/v1", ResourceVersion:"430", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-766959b846-27bjs
	I0207 21:52:07.389247       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"e661e53b-2497-4d1f-9b73-b24d77164efc", APIVersion:"apps/v1", ResourceVersion:"423", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-6b84985989-jz2sz
	E0207 21:52:26.632896       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0207 21:52:28.081541       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0207 21:52:56.887397       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0207 21:53:00.094530       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0207 21:53:27.142277       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0207 21:53:32.100896       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0207 21:53:57.403863       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0207 21:54:04.108750       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [ec95e16fcc80] <==
	* W0207 21:52:00.479224       1 proxier.go:584] Failed to read file /lib/modules/5.10.16.3-microsoft-standard-WSL2/modules.builtin with error open /lib/modules/5.10.16.3-microsoft-standard-WSL2/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0207 21:52:00.482359       1 proxier.go:597] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0207 21:52:00.484777       1 proxier.go:597] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0207 21:52:00.486960       1 proxier.go:597] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0207 21:52:00.490107       1 proxier.go:597] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0207 21:52:00.494636       1 proxier.go:597] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0207 21:52:00.587897       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0207 21:52:00.623652       1 node.go:135] Successfully retrieved node IP: 192.168.76.2
	I0207 21:52:00.623799       1 server_others.go:149] Using iptables Proxier.
	I0207 21:52:00.626264       1 server.go:529] Version: v1.16.0
	I0207 21:52:00.628350       1 config.go:131] Starting endpoints config controller
	I0207 21:52:00.628643       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0207 21:52:00.628861       1 config.go:313] Starting service config controller
	I0207 21:52:00.629011       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0207 21:52:00.729500       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0207 21:52:00.729642       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [afe896f092f1] <==
	* W0207 21:51:36.180603       1 authentication.go:79] Authentication is disabled
	I0207 21:51:36.180649       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0207 21:51:36.181984       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0207 21:51:36.286841       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0207 21:51:36.286848       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0207 21:51:36.286911       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0207 21:51:36.286956       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0207 21:51:36.286960       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0207 21:51:36.287554       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0207 21:51:36.287896       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0207 21:51:36.288003       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0207 21:51:36.288872       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0207 21:51:36.289232       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0207 21:51:36.381314       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0207 21:51:37.289261       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0207 21:51:37.291720       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0207 21:51:37.380484       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0207 21:51:37.383892       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0207 21:51:37.384911       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0207 21:51:37.385754       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0207 21:51:37.389743       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0207 21:51:37.391425       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0207 21:51:37.391816       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0207 21:51:37.393133       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0207 21:51:37.393190       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-02-07 21:45:36 UTC, end at Mon 2022-02-07 21:54:26 UTC. --
	Feb 07 21:52:57 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:52:57.724000    5553 kuberuntime_image.go:50] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Feb 07 21:52:57 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:52:57.724283    5553 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Feb 07 21:52:57 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:52:57.724353    5553 pod_workers.go:191] Error syncing pod 86c8acbf-55aa-4568-a55d-61add7a8d512 ("metrics-server-5b7b789f-nkphv_kube-system(86c8acbf-55aa-4568-a55d-61add7a8d512)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Feb 07 21:53:09 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:53:09.634053    5553 pod_workers.go:191] Error syncing pod 8c81d472-4437-4f06-9a37-da1d85d8b073 ("dashboard-metrics-scraper-6b84985989-jz2sz_kubernetes-dashboard(8c81d472-4437-4f06-9a37-da1d85d8b073)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-jz2sz_kubernetes-dashboard(8c81d472-4437-4f06-9a37-da1d85d8b073)"
	Feb 07 21:53:12 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:53:12.637252    5553 pod_workers.go:191] Error syncing pod 86c8acbf-55aa-4568-a55d-61add7a8d512 ("metrics-server-5b7b789f-nkphv_kube-system(86c8acbf-55aa-4568-a55d-61add7a8d512)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Feb 07 21:53:25 old-k8s-version-20220207213422-8704 kubelet[5553]: W0207 21:53:25.124265    5553 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-jz2sz through plugin: invalid network status for
	Feb 07 21:53:25 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:53:25.637010    5553 pod_workers.go:191] Error syncing pod 86c8acbf-55aa-4568-a55d-61add7a8d512 ("metrics-server-5b7b789f-nkphv_kube-system(86c8acbf-55aa-4568-a55d-61add7a8d512)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Feb 07 21:53:26 old-k8s-version-20220207213422-8704 kubelet[5553]: W0207 21:53:26.213468    5553 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-jz2sz through plugin: invalid network status for
	Feb 07 21:53:26 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:53:26.227658    5553 pod_workers.go:191] Error syncing pod 8c81d472-4437-4f06-9a37-da1d85d8b073 ("dashboard-metrics-scraper-6b84985989-jz2sz_kubernetes-dashboard(8c81d472-4437-4f06-9a37-da1d85d8b073)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-jz2sz_kubernetes-dashboard(8c81d472-4437-4f06-9a37-da1d85d8b073)"
	Feb 07 21:53:27 old-k8s-version-20220207213422-8704 kubelet[5553]: W0207 21:53:27.244435    5553 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-jz2sz through plugin: invalid network status for
	Feb 07 21:53:34 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:53:34.880965    5553 pod_workers.go:191] Error syncing pod 8c81d472-4437-4f06-9a37-da1d85d8b073 ("dashboard-metrics-scraper-6b84985989-jz2sz_kubernetes-dashboard(8c81d472-4437-4f06-9a37-da1d85d8b073)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-jz2sz_kubernetes-dashboard(8c81d472-4437-4f06-9a37-da1d85d8b073)"
	Feb 07 21:53:40 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:53:40.724329    5553 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Feb 07 21:53:40 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:53:40.725681    5553 kuberuntime_image.go:50] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Feb 07 21:53:40 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:53:40.725774    5553 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Feb 07 21:53:40 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:53:40.725810    5553 pod_workers.go:191] Error syncing pod 86c8acbf-55aa-4568-a55d-61add7a8d512 ("metrics-server-5b7b789f-nkphv_kube-system(86c8acbf-55aa-4568-a55d-61add7a8d512)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Feb 07 21:53:47 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:53:47.633514    5553 pod_workers.go:191] Error syncing pod 8c81d472-4437-4f06-9a37-da1d85d8b073 ("dashboard-metrics-scraper-6b84985989-jz2sz_kubernetes-dashboard(8c81d472-4437-4f06-9a37-da1d85d8b073)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-jz2sz_kubernetes-dashboard(8c81d472-4437-4f06-9a37-da1d85d8b073)"
	Feb 07 21:53:51 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:53:51.640099    5553 pod_workers.go:191] Error syncing pod 86c8acbf-55aa-4568-a55d-61add7a8d512 ("metrics-server-5b7b789f-nkphv_kube-system(86c8acbf-55aa-4568-a55d-61add7a8d512)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Feb 07 21:54:02 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:54:02.632686    5553 pod_workers.go:191] Error syncing pod 8c81d472-4437-4f06-9a37-da1d85d8b073 ("dashboard-metrics-scraper-6b84985989-jz2sz_kubernetes-dashboard(8c81d472-4437-4f06-9a37-da1d85d8b073)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-jz2sz_kubernetes-dashboard(8c81d472-4437-4f06-9a37-da1d85d8b073)"
	Feb 07 21:54:04 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:54:04.638557    5553 pod_workers.go:191] Error syncing pod 86c8acbf-55aa-4568-a55d-61add7a8d512 ("metrics-server-5b7b789f-nkphv_kube-system(86c8acbf-55aa-4568-a55d-61add7a8d512)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Feb 07 21:54:14 old-k8s-version-20220207213422-8704 kubelet[5553]: W0207 21:54:14.089011    5553 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-jz2sz through plugin: invalid network status for
	Feb 07 21:54:15 old-k8s-version-20220207213422-8704 kubelet[5553]: W0207 21:54:15.205062    5553 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-jz2sz through plugin: invalid network status for
	Feb 07 21:54:15 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:54:15.220345    5553 pod_workers.go:191] Error syncing pod 8c81d472-4437-4f06-9a37-da1d85d8b073 ("dashboard-metrics-scraper-6b84985989-jz2sz_kubernetes-dashboard(8c81d472-4437-4f06-9a37-da1d85d8b073)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-jz2sz_kubernetes-dashboard(8c81d472-4437-4f06-9a37-da1d85d8b073)"
	Feb 07 21:54:16 old-k8s-version-20220207213422-8704 kubelet[5553]: W0207 21:54:16.236243    5553 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-jz2sz through plugin: invalid network status for
	Feb 07 21:54:17 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:54:17.642753    5553 pod_workers.go:191] Error syncing pod 86c8acbf-55aa-4568-a55d-61add7a8d512 ("metrics-server-5b7b789f-nkphv_kube-system(86c8acbf-55aa-4568-a55d-61add7a8d512)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Feb 07 21:54:24 old-k8s-version-20220207213422-8704 kubelet[5553]: E0207 21:54:24.884166    5553 pod_workers.go:191] Error syncing pod 8c81d472-4437-4f06-9a37-da1d85d8b073 ("dashboard-metrics-scraper-6b84985989-jz2sz_kubernetes-dashboard(8c81d472-4437-4f06-9a37-da1d85d8b073)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-jz2sz_kubernetes-dashboard(8c81d472-4437-4f06-9a37-da1d85d8b073)"
	
	* 
	* ==> kubernetes-dashboard [60f4881418fd] <==
	* 2022/02/07 21:52:11 Starting overwatch
	2022/02/07 21:52:11 Using namespace: kubernetes-dashboard
	2022/02/07 21:52:11 Using in-cluster config to connect to apiserver
	2022/02/07 21:52:11 Using secret token for csrf signing
	2022/02/07 21:52:11 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/02/07 21:52:11 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/02/07 21:52:11 Successful initial request to the apiserver, version: v1.16.0
	2022/02/07 21:52:11 Generating JWE encryption key
	2022/02/07 21:52:11 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/02/07 21:52:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/02/07 21:52:12 Initializing JWE encryption key from synchronized object
	2022/02/07 21:52:12 Creating in-cluster Sidecar client
	2022/02/07 21:52:13 Serving insecurely on HTTP port: 9090
	2022/02/07 21:52:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/02/07 21:52:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/02/07 21:53:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/02/07 21:53:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/02/07 21:54:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [a8ed79fbdc06] <==
	* I0207 21:52:08.380934       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0207 21:52:08.582470       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0207 21:52:08.583083       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0207 21:52:08.679909       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0207 21:52:08.680194       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"df1c4e9a-abf2-4450-a251-b193d27d1266", APIVersion:"v1", ResourceVersion:"490", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-20220207213422-8704_e02cf656-cd1c-466d-b4a8-c6ae790907b4 became leader
	I0207 21:52:08.680316       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20220207213422-8704_e02cf656-cd1c-466d-b4a8-c6ae790907b4!
	I0207 21:52:08.781027       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20220207213422-8704_e02cf656-cd1c-466d-b4a8-c6ae790907b4!
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-20220207213422-8704 -n old-k8s-version-20220207213422-8704

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-20220207213422-8704 -n old-k8s-version-20220207213422-8704: (8.0573067s)
helpers_test.go:262: (dbg) Run:  kubectl --context old-k8s-version-20220207213422-8704 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
E0207 21:54:35.870318    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\no-preload-20220207213438-8704\client.crt: The system cannot find the path specified.
helpers_test.go:271: non-running pods: metrics-server-5b7b789f-nkphv
helpers_test.go:273: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20220207213422-8704 describe pod metrics-server-5b7b789f-nkphv
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220207213422-8704 describe pod metrics-server-5b7b789f-nkphv: exit status 1 (417.1578ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-5b7b789f-nkphv" not found

** /stderr **
helpers_test.go:278: kubectl --context old-k8s-version-20220207213422-8704 describe pod metrics-server-5b7b789f-nkphv: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (64.46s)


Test pass (237/273)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 17.78
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.34
10 TestDownloadOnly/v1.23.3/json-events 15.96
11 TestDownloadOnly/v1.23.3/preload-exists 0
14 TestDownloadOnly/v1.23.3/kubectl 0
15 TestDownloadOnly/v1.23.3/LogsDuration 0.31
17 TestDownloadOnly/v1.23.4-rc.0/json-events 15.81
18 TestDownloadOnly/v1.23.4-rc.0/preload-exists 0
21 TestDownloadOnly/v1.23.4-rc.0/kubectl 0
22 TestDownloadOnly/v1.23.4-rc.0/LogsDuration 0.31
23 TestDownloadOnly/DeleteAll 12.73
24 TestDownloadOnly/DeleteAlwaysSucceeds 7.32
25 TestDownloadOnlyKic 57.65
26 TestBinaryMirror 16.77
27 TestOffline 285.63
29 TestAddons/Setup 495.43
34 TestAddons/parallel/HelmTiller 30.32
36 TestAddons/parallel/CSI 69.72
38 TestAddons/serial/GCPAuth 29.11
39 TestAddons/StoppedEnableDisable 25.69
40 TestCertOptions 226.64
41 TestCertExpiration 467.2
43 TestForceSystemdFlag 263.31
49 TestErrorSpam/setup 132.17
50 TestErrorSpam/start 21.78
51 TestErrorSpam/status 20.64
52 TestErrorSpam/pause 17.6
53 TestErrorSpam/unpause 18.47
54 TestErrorSpam/stop 35.06
57 TestFunctional/serial/CopySyncFile 0.03
58 TestFunctional/serial/StartWithProxy 171.14
59 TestFunctional/serial/AuditLog 0
60 TestFunctional/serial/SoftStart 37.55
61 TestFunctional/serial/KubeContext 0.23
62 TestFunctional/serial/KubectlGetPods 0.35
65 TestFunctional/serial/CacheCmd/cache/add_remote 18.32
66 TestFunctional/serial/CacheCmd/cache/add_local 9.64
67 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.32
68 TestFunctional/serial/CacheCmd/cache/list 0.3
69 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 6.7
70 TestFunctional/serial/CacheCmd/cache/cache_reload 25.95
71 TestFunctional/serial/CacheCmd/cache/delete 0.62
72 TestFunctional/serial/MinikubeKubectlCmd 2.31
73 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.79
74 TestFunctional/serial/ExtraConfig 64.88
75 TestFunctional/serial/ComponentHealth 0.29
76 TestFunctional/serial/LogsCmd 7.99
77 TestFunctional/serial/LogsFileCmd 7.84
79 TestFunctional/parallel/ConfigCmd 1.98
81 TestFunctional/parallel/DryRun 13.25
82 TestFunctional/parallel/InternationalLanguage 5.6
83 TestFunctional/parallel/StatusCmd 22.6
87 TestFunctional/parallel/AddonsCmd 3.35
88 TestFunctional/parallel/PersistentVolumeClaim 50.99
90 TestFunctional/parallel/SSHCmd 15.29
91 TestFunctional/parallel/CpCmd 28.06
92 TestFunctional/parallel/MySQL 72.42
93 TestFunctional/parallel/FileSync 7.73
94 TestFunctional/parallel/CertSync 46
98 TestFunctional/parallel/NodeLabels 0.3
100 TestFunctional/parallel/NonActiveRuntimeDisabled 7.55
102 TestFunctional/parallel/ProfileCmd/profile_not_create 10.65
103 TestFunctional/parallel/ProfileCmd/profile_list 7.59
104 TestFunctional/parallel/ProfileCmd/profile_json_output 7.66
105 TestFunctional/parallel/Version/short 0.38
106 TestFunctional/parallel/Version/components 10.6
107 TestFunctional/parallel/ImageCommands/ImageListShort 5.57
108 TestFunctional/parallel/ImageCommands/ImageListTable 4.35
109 TestFunctional/parallel/ImageCommands/ImageListJson 4.5
110 TestFunctional/parallel/ImageCommands/ImageListYaml 4.48
111 TestFunctional/parallel/ImageCommands/ImageBuild 21.33
112 TestFunctional/parallel/ImageCommands/Setup 6.33
113 TestFunctional/parallel/DockerEnv/powershell 29.8
114 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 18.75
115 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 15.1
116 TestFunctional/parallel/UpdateContextCmd/no_changes 4.52
117 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 4.3
118 TestFunctional/parallel/UpdateContextCmd/no_clusters 4.29
119 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 28.89
121 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
123 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 16.61
124 TestFunctional/parallel/ImageCommands/ImageSaveToFile 9.48
125 TestFunctional/parallel/ImageCommands/ImageRemove 9
127 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 16.75
128 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 13.45
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/delete_addon-resizer_images 4.01
135 TestFunctional/delete_my-image_image 1.34
136 TestFunctional/delete_minikube_cached_images 1.3
139 TestIngressAddonLegacy/StartLegacyK8sCluster 171.59
141 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 49.89
142 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 4.92
146 TestJSONOutput/start/Command 172.21
147 TestJSONOutput/start/Audit 0
149 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
150 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
152 TestJSONOutput/pause/Command 6.54
153 TestJSONOutput/pause/Audit 0
155 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
156 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
158 TestJSONOutput/unpause/Command 6.45
159 TestJSONOutput/unpause/Audit 0
161 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
162 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
164 TestJSONOutput/stop/Command 19.84
165 TestJSONOutput/stop/Audit 0
167 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
168 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
169 TestErrorJSONOutput 7.56
171 TestKicCustomNetwork/create_custom_network 159.46
172 TestKicCustomNetwork/use_default_bridge_network 149.34
173 TestKicExistingNetwork 166.06
174 TestMainNoArgs 0.29
177 TestMountStart/serial/StartWithMountFirst 58.44
178 TestMountStart/serial/VerifyMountFirst 6.69
179 TestMountStart/serial/StartWithMountSecond 59.58
180 TestMountStart/serial/VerifyMountSecond 6.37
181 TestMountStart/serial/DeleteFirst 21.49
182 TestMountStart/serial/VerifyMountPostDelete 6.46
183 TestMountStart/serial/Stop 9.24
184 TestMountStart/serial/RestartStopped 32.94
185 TestMountStart/serial/VerifyMountPostStop 6.37
188 TestMultiNode/serial/FreshStart2Nodes 305.87
189 TestMultiNode/serial/DeployApp2Nodes 24.97
190 TestMultiNode/serial/PingHostFrom2Pods 9.75
191 TestMultiNode/serial/AddNode 143.38
192 TestMultiNode/serial/ProfileList 6.84
193 TestMultiNode/serial/CopyFile 235.13
194 TestMultiNode/serial/StopNode 32.39
195 TestMultiNode/serial/StartAfterStop 62.64
196 TestMultiNode/serial/RestartKeepsNodes 248.22
197 TestMultiNode/serial/DeleteNode 49.47
198 TestMultiNode/serial/StopMultiNode 44.16
199 TestMultiNode/serial/RestartMultiNode 165.78
200 TestMultiNode/serial/ValidateNameConflict 176.4
204 TestPreload 410.41
205 TestScheduledStopWindows 248.71
207 TestSkaffold 237.03
209 TestInsufficientStorage 137.35
210 TestRunningBinaryUpgrade 427.25
212 TestKubernetesUpgrade 561.71
213 TestMissingContainerUpgrade 730.39
215 TestNoKubernetes/serial/StartNoK8sWithVersion 0.41
216 TestNoKubernetes/serial/StartWithK8s 220.87
229 TestStoppedBinaryUpgrade/Setup 0.8
230 TestStoppedBinaryUpgrade/Upgrade 468.57
231 TestStoppedBinaryUpgrade/MinikubeLogs 24.46
240 TestPause/serial/Start 206.06
241 TestPause/serial/SecondStartNoReconfiguration 44.33
243 TestPause/serial/Pause 7.21
244 TestNetworkPlugins/group/false/Start 627.28
245 TestPause/serial/VerifyStatus 7.53
246 TestPause/serial/Unpause 7.45
247 TestPause/serial/PauseAgain 7.64
248 TestPause/serial/DeletePaused 56.78
251 TestNetworkPlugins/group/calico/Start 245.75
252 TestNetworkPlugins/group/calico/ControllerPod 5.07
253 TestNetworkPlugins/group/calico/KubeletFlags 7.44
254 TestNetworkPlugins/group/calico/NetCatPod 25.03
255 TestNetworkPlugins/group/calico/DNS 0.64
256 TestNetworkPlugins/group/calico/Localhost 0.68
257 TestNetworkPlugins/group/calico/HairPin 0.58
260 TestNetworkPlugins/group/kindnet/Start 200.7
261 TestNetworkPlugins/group/false/KubeletFlags 7.19
262 TestNetworkPlugins/group/false/NetCatPod 19.93
263 TestNetworkPlugins/group/false/DNS 0.62
264 TestNetworkPlugins/group/false/Localhost 0.7
265 TestNetworkPlugins/group/false/HairPin 5.56
266 TestNetworkPlugins/group/bridge/Start 171.59
267 TestNetworkPlugins/group/kindnet/ControllerPod 5.06
268 TestNetworkPlugins/group/kindnet/KubeletFlags 7.46
269 TestNetworkPlugins/group/kindnet/NetCatPod 20.19
270 TestNetworkPlugins/group/kindnet/DNS 0.71
271 TestNetworkPlugins/group/kindnet/Localhost 0.53
272 TestNetworkPlugins/group/kindnet/HairPin 0.58
273 TestNetworkPlugins/group/kubenet/Start 253.25
274 TestNetworkPlugins/group/bridge/KubeletFlags 7.53
275 TestNetworkPlugins/group/bridge/NetCatPod 28.87
276 TestNetworkPlugins/group/bridge/DNS 0.65
277 TestNetworkPlugins/group/bridge/Localhost 0.6
278 TestNetworkPlugins/group/bridge/HairPin 0.6
280 TestStartStop/group/old-k8s-version/serial/FirstStart 613.55
282 TestStartStop/group/no-preload/serial/FirstStart 255.63
284 TestStartStop/group/embed-certs/serial/FirstStart 215.67
285 TestNetworkPlugins/group/kubenet/KubeletFlags 8.47
286 TestNetworkPlugins/group/kubenet/NetCatPod 24.05
287 TestNetworkPlugins/group/kubenet/DNS 0.65
288 TestNetworkPlugins/group/kubenet/Localhost 0.67
289 TestNetworkPlugins/group/kubenet/HairPin 0.66
291 TestStartStop/group/default-k8s-different-port/serial/FirstStart 185.14
292 TestStartStop/group/embed-certs/serial/DeployApp 18.13
293 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 6.68
294 TestStartStop/group/no-preload/serial/DeployApp 11.39
295 TestStartStop/group/embed-certs/serial/Stop 22.01
296 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 6.32
297 TestStartStop/group/no-preload/serial/Stop 20.46
298 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 6.48
299 TestStartStop/group/embed-certs/serial/SecondStart 442.86
300 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 5.96
301 TestStartStop/group/no-preload/serial/SecondStart 442.67
302 TestStartStop/group/default-k8s-different-port/serial/DeployApp 12.25
303 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 6.55
304 TestStartStop/group/default-k8s-different-port/serial/Stop 23.11
305 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 6.44
306 TestStartStop/group/default-k8s-different-port/serial/SecondStart 455.75
307 TestStartStop/group/old-k8s-version/serial/DeployApp 11.31
308 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 6.11
309 TestStartStop/group/old-k8s-version/serial/Stop 21
310 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 6.08
311 TestStartStop/group/old-k8s-version/serial/SecondStart 472.91
312 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.06
313 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.72
314 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 8.09
315 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 17.1
316 TestStartStop/group/embed-certs/serial/Pause 45.66
317 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.56
318 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 7.59
319 TestStartStop/group/no-preload/serial/Pause 58.31
321 TestStartStop/group/newest-cni/serial/FirstStart 171.27
322 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 5.17
323 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.77
324 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 7.98
325 TestStartStop/group/default-k8s-different-port/serial/Pause 49.34
326 TestStartStop/group/newest-cni/serial/DeployApp 0
327 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 6.24
328 TestStartStop/group/newest-cni/serial/Stop 23.58
329 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 6.03
330 TestStartStop/group/newest-cni/serial/SecondStart 84.08
331 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.05
332 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.57
333 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 7.89
335 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 8.21
338 TestStartStop/group/newest-cni/serial/Pause 50.4
TestDownloadOnly/v1.16.0/json-events (17.78s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:73: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220207191910-8704 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:73: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220207191910-8704 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker: (17.7750754s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (17.78s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.34s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:175: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20220207191910-8704
aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20220207191910-8704: exit status 85 (337.998ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/02/07 19:19:12
	Running on machine: minikube3
	Binary: Built with gc go1.17.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0207 19:19:12.236326   13116 out.go:297] Setting OutFile to fd 644 ...
	I0207 19:19:12.292947   13116 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:19:12.292947   13116 out.go:310] Setting ErrFile to fd 648...
	I0207 19:19:12.292947   13116 out.go:344] TERM=,COLORTERM=, which probably does not support color
	W0207 19:19:12.304786   13116 root.go:293] Error reading config file at C:\Users\jenkins.minikube3\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube3\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0207 19:19:12.309071   13116 out.go:304] Setting JSON to true
	I0207 19:19:12.312200   13116 start.go:112] hostinfo: {"hostname":"minikube3","uptime":429171,"bootTime":1643832381,"procs":150,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0207 19:19:12.312200   13116 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0207 19:19:12.319144   13116 notify.go:174] Checking for updates...
	W0207 19:19:12.320189   13116 preload.go:295] Failed to list preload files: open C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0207 19:19:12.324168   13116 driver.go:344] Setting default libvirt URI to qemu:///system
	I0207 19:19:14.706606   13116 docker.go:132] docker version: linux-20.10.12
	I0207 19:19:14.711992   13116 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 19:19:16.662450   13116 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (1.9504479s)
	I0207 19:19:16.663484   13116 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-02-07 19:19:15.7984817 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 19:19:16.681630   13116 start.go:281] selected driver: docker
	I0207 19:19:16.681630   13116 start.go:798] validating driver "docker" against <nil>
	I0207 19:19:16.697006   13116 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 19:19:18.639286   13116 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (1.9422704s)
	I0207 19:19:18.639286   13116 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-02-07 19:19:17.7541906 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 19:19:18.639286   13116 start_flags.go:288] no existing cluster config was found, will generate one from the flags 
	I0207 19:19:18.743291   13116 start_flags.go:369] Using suggested 16300MB memory alloc based on sys=65534MB, container=51412MB
	I0207 19:19:18.744326   13116 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	I0207 19:19:18.744326   13116 start_flags.go:813] Wait components to verify : map[apiserver:true system_pods:true]
	I0207 19:19:18.744326   13116 cni.go:93] Creating CNI manager for ""
	I0207 19:19:18.744326   13116 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0207 19:19:18.744326   13116 start_flags.go:302] config:
	{Name:download-only-20220207191910-8704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220207191910-8704 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 19:19:18.757328   13116 cache.go:120] Beginning downloading kic base image for docker with docker
	I0207 19:19:18.759333   13116 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0207 19:19:18.760293   13116 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon
	I0207 19:19:18.816056   13116 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.16.0/preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0207 19:19:18.816313   13116 cache.go:57] Caching tarball of preloaded images
	I0207 19:19:18.816489   13116 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0207 19:19:18.819155   13116 preload.go:238] getting checksum for preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0207 19:19:18.891187   13116 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.16.0/preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:0c23f68e9d9de4489f09a530426fd1e3 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0207 19:19:20.112301   13116 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 to local cache
	I0207 19:19:20.112425   13116 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\kicbase-builds_v0.0.29-1643823806-13302@sha256_9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8.tar
	I0207 19:19:20.112601   13116 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\kicbase-builds_v0.0.29-1643823806-13302@sha256_9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8.tar
	I0207 19:19:20.112893   13116 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local cache directory
	I0207 19:19:20.113953   13116 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 to local cache
	I0207 19:19:22.549047   13116 preload.go:249] saving checksum for preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0207 19:19:22.671457   13116 preload.go:256] verifying checksum of C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0207 19:19:24.000499   13116 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0207 19:19:24.001515   13116 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\download-only-20220207191910-8704\config.json ...
	I0207 19:19:24.002007   13116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\download-only-20220207191910-8704\config.json: {Name:mkfed96b028eaae616605e7be0d3f977dea44bde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:19:24.003158   13116 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0207 19:19:24.005152   13116 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/windows/amd64/kubectl.exe?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/windows/amd64/kubectl.exe.sha1 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\windows\v1.16.0/kubectl.exe
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220207191910-8704"

-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.34s)

TestDownloadOnly/v1.23.3/json-events (15.96s)

=== RUN   TestDownloadOnly/v1.23.3/json-events
aaa_download_only_test.go:73: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220207191910-8704 --force --alsologtostderr --kubernetes-version=v1.23.3 --container-runtime=docker --driver=docker
aaa_download_only_test.go:73: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220207191910-8704 --force --alsologtostderr --kubernetes-version=v1.23.3 --container-runtime=docker --driver=docker: (15.9576516s)
--- PASS: TestDownloadOnly/v1.23.3/json-events (15.96s)

TestDownloadOnly/v1.23.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.23.3/preload-exists
--- PASS: TestDownloadOnly/v1.23.3/preload-exists (0.00s)

TestDownloadOnly/v1.23.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.23.3/kubectl
--- PASS: TestDownloadOnly/v1.23.3/kubectl (0.00s)

TestDownloadOnly/v1.23.3/LogsDuration (0.31s)

=== RUN   TestDownloadOnly/v1.23.3/LogsDuration
aaa_download_only_test.go:175: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20220207191910-8704
aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20220207191910-8704: exit status 85 (304.3872ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/02/07 19:19:29
	Running on machine: minikube3
	Binary: Built with gc go1.17.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0207 19:19:28.978764    7524 out.go:297] Setting OutFile to fd 648 ...
	I0207 19:19:29.037764    7524 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:19:29.037764    7524 out.go:310] Setting ErrFile to fd 572...
	I0207 19:19:29.037764    7524 out.go:344] TERM=,COLORTERM=, which probably does not support color
	W0207 19:19:29.051329    7524 root.go:293] Error reading config file at C:\Users\jenkins.minikube3\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube3\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I0207 19:19:29.052021    7524 out.go:304] Setting JSON to true
	I0207 19:19:29.061971    7524 start.go:112] hostinfo: {"hostname":"minikube3","uptime":429188,"bootTime":1643832381,"procs":151,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0207 19:19:29.062638    7524 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0207 19:19:29.365907    7524 notify.go:174] Checking for updates...
	I0207 19:19:29.519668    7524 config.go:176] Loaded profile config "download-only-20220207191910-8704": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0207 19:19:29.519668    7524 start.go:706] api.Load failed for download-only-20220207191910-8704: filestore "download-only-20220207191910-8704": Docker machine "download-only-20220207191910-8704" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0207 19:19:29.520210    7524 driver.go:344] Setting default libvirt URI to qemu:///system
	W0207 19:19:29.520377    7524 start.go:706] api.Load failed for download-only-20220207191910-8704: filestore "download-only-20220207191910-8704": Docker machine "download-only-20220207191910-8704" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0207 19:19:31.906032    7524 docker.go:132] docker version: linux-20.10.12
	I0207 19:19:31.912121    7524 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 19:19:33.929055    7524 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.0169235s)
	I0207 19:19:33.929842    7524 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-02-07 19:19:32.9877525 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_
bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 19:19:33.933103    7524 start.go:281] selected driver: docker
	I0207 19:19:33.933103    7524 start.go:798] validating driver "docker" against &{Name:download-only-20220207191910-8704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220207191910-8704 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 19:19:33.951109    7524 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 19:19:36.010186    7524 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.0588318s)
	I0207 19:19:36.010363    7524 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-02-07 19:19:35.068408 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64
IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_b
ps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 19:19:36.060254    7524 cni.go:93] Creating CNI manager for ""
	I0207 19:19:36.060254    7524 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0207 19:19:36.060254    7524 start_flags.go:302] config:
	{Name:download-only-20220207191910-8704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:download-only-20220207191910-8704 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 19:19:36.221481    7524 cache.go:120] Beginning downloading kic base image for docker with docker
	I0207 19:19:36.225592    7524 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 19:19:36.225592    7524 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon
	I0207 19:19:36.263535    7524 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.3/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4
	I0207 19:19:36.263570    7524 cache.go:57] Caching tarball of preloaded images
	I0207 19:19:36.263570    7524 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 19:19:36.265933    7524 preload.go:238] getting checksum for preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4 ...
	I0207 19:19:36.333188    7524 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.3/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4?checksum=md5:1c52b21a02ef67e2e4434a0c47aabce7 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4
	I0207 19:19:37.530387    7524 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 to local cache
	I0207 19:19:37.530387    7524 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\kicbase-builds_v0.0.29-1643823806-13302@sha256_9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8.tar
	I0207 19:19:37.530942    7524 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\kicbase-builds_v0.0.29-1643823806-13302@sha256_9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8.tar
	I0207 19:19:37.530942    7524 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local cache directory
	I0207 19:19:37.531144    7524 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local cache directory, skipping pull
	I0207 19:19:37.531144    7524 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 exists in cache, skipping pull
	I0207 19:19:37.531144    7524 cache.go:151] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 as a tarball
	I0207 19:19:39.862078    7524 preload.go:249] saving checksum for preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4 ...
	I0207 19:19:39.862333    7524 preload.go:256] verifying checksum of C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4 ...
	I0207 19:19:41.255623    7524 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.3 on docker
	I0207 19:19:41.256352    7524 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\download-only-20220207191910-8704\config.json ...
	I0207 19:19:41.261584    7524 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 19:19:41.261780    7524 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.23.3/bin/windows/amd64/kubectl.exe?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.3/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\windows\v1.23.3/kubectl.exe
	I0207 19:19:43.537966    7524 cache.go:208] Successfully downloaded all kic artifacts
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220207191910-8704"

-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.3/LogsDuration (0.31s)

TestDownloadOnly/v1.23.4-rc.0/json-events (15.81s)

=== RUN   TestDownloadOnly/v1.23.4-rc.0/json-events
aaa_download_only_test.go:73: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220207191910-8704 --force --alsologtostderr --kubernetes-version=v1.23.4-rc.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:73: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220207191910-8704 --force --alsologtostderr --kubernetes-version=v1.23.4-rc.0 --container-runtime=docker --driver=docker: (15.8092826s)
--- PASS: TestDownloadOnly/v1.23.4-rc.0/json-events (15.81s)

TestDownloadOnly/v1.23.4-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.23.4-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.23.4-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.23.4-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.23.4-rc.0/kubectl
--- PASS: TestDownloadOnly/v1.23.4-rc.0/kubectl (0.00s)

TestDownloadOnly/v1.23.4-rc.0/LogsDuration (0.31s)

=== RUN   TestDownloadOnly/v1.23.4-rc.0/LogsDuration
aaa_download_only_test.go:175: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20220207191910-8704
aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20220207191910-8704: exit status 85 (310.1577ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/02/07 19:19:45
	Running on machine: minikube3
	Binary: Built with gc go1.17.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0207 19:19:45.232254    9376 out.go:297] Setting OutFile to fd 672 ...
	I0207 19:19:45.287849    9376 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:19:45.287945    9376 out.go:310] Setting ErrFile to fd 664...
	I0207 19:19:45.287945    9376 out.go:344] TERM=,COLORTERM=, which probably does not support color
	W0207 19:19:45.298106    9376 root.go:293] Error reading config file at C:\Users\jenkins.minikube3\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube3\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I0207 19:19:45.299297    9376 out.go:304] Setting JSON to true
	I0207 19:19:45.301724    9376 start.go:112] hostinfo: {"hostname":"minikube3","uptime":429204,"bootTime":1643832381,"procs":151,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0207 19:19:45.301724    9376 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0207 19:19:45.306754    9376 notify.go:174] Checking for updates...
	I0207 19:19:45.310145    9376 config.go:176] Loaded profile config "download-only-20220207191910-8704": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	W0207 19:19:45.310145    9376 start.go:706] api.Load failed for download-only-20220207191910-8704: filestore "download-only-20220207191910-8704": Docker machine "download-only-20220207191910-8704" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0207 19:19:45.311238    9376 driver.go:344] Setting default libvirt URI to qemu:///system
	W0207 19:19:45.311539    9376 start.go:706] api.Load failed for download-only-20220207191910-8704: filestore "download-only-20220207191910-8704": Docker machine "download-only-20220207191910-8704" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0207 19:19:47.734902    9376 docker.go:132] docker version: linux-20.10.12
	I0207 19:19:47.741654    9376 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 19:19:49.731931    9376 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (1.9900901s)
	I0207 19:19:49.732628    9376 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-02-07 19:19:48.8163942 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_
bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 19:19:50.379679    9376 start.go:281] selected driver: docker
	I0207 19:19:50.471784    9376 start.go:798] validating driver "docker" against &{Name:download-only-20220207191910-8704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:download-only-20220207191910-8704 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 19:19:50.489985    9376 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 19:19:52.494858    9376 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.004759s)
	I0207 19:19:52.495346    9376 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-02-07 19:19:51.5843485 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_
bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 19:19:52.539061    9376 cni.go:93] Creating CNI manager for ""
	I0207 19:19:52.539139    9376 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0207 19:19:52.539139    9376 start_flags.go:302] config:
	{Name:download-only-20220207191910-8704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4-rc.0 ClusterName:download-only-20220207191910-8704 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 19:19:52.728892    9376 cache.go:120] Beginning downloading kic base image for docker with docker
	I0207 19:19:52.731558    9376 preload.go:132] Checking if preload exists for k8s version v1.23.4-rc.0 and runtime docker
	I0207 19:19:52.731558    9376 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon
	I0207 19:19:52.766414    9376 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.4-rc.0/preloaded-images-k8s-v17-v1.23.4-rc.0-docker-overlay2-amd64.tar.lz4
	I0207 19:19:52.766414    9376 cache.go:57] Caching tarball of preloaded images
	I0207 19:19:52.767043    9376 preload.go:132] Checking if preload exists for k8s version v1.23.4-rc.0 and runtime docker
	I0207 19:19:52.769638    9376 preload.go:238] getting checksum for preloaded-images-k8s-v17-v1.23.4-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0207 19:19:52.872717    9376 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.4-rc.0/preloaded-images-k8s-v17-v1.23.4-rc.0-docker-overlay2-amd64.tar.lz4?checksum=md5:d735572711ef4032ba979f3c4f19cb7e -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.4-rc.0-docker-overlay2-amd64.tar.lz4
	I0207 19:19:54.011025    9376 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 to local cache
	I0207 19:19:54.011025    9376 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\kicbase-builds_v0.0.29-1643823806-13302@sha256_9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8.tar
	I0207 19:19:54.011025    9376 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\kicbase-builds_v0.0.29-1643823806-13302@sha256_9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8.tar
	I0207 19:19:54.011025    9376 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local cache directory
	I0207 19:19:54.011557    9376 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local cache directory, skipping pull
	I0207 19:19:54.011557    9376 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 exists in cache, skipping pull
	I0207 19:19:54.011731    9376 cache.go:151] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 as a tarball
	I0207 19:19:56.603900    9376 preload.go:249] saving checksum for preloaded-images-k8s-v17-v1.23.4-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0207 19:19:56.606215    9376 preload.go:256] verifying checksum of C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.4-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0207 19:19:57.930078    9376 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.4-rc.0 on docker
	I0207 19:19:57.931063    9376 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\download-only-20220207191910-8704\config.json ...
	I0207 19:19:57.933277    9376 preload.go:132] Checking if preload exists for k8s version v1.23.4-rc.0 and runtime docker
	I0207 19:19:57.933658    9376 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.23.4-rc.0/bin/windows/amd64/kubectl.exe?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.4-rc.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\windows\v1.23.4-rc.0/kubectl.exe
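The download above appends a `checksum=file:…kubectl.exe.sha256` query, telling minikube's downloader to fetch the sidecar digest file and verify the binary after download. A minimal sketch of the same verification under the assumption that the sidecar holds a bare hex SHA-256 digest; the file names `kubectl.bin` and `kubectl.bin.sha256` are stand-ins, not the real cache paths:

```shell
# Stand-ins for kubectl.exe and its sidecar kubectl.exe.sha256
# (assumed to contain only the hex digest, no filename column).
printf 'fake-binary-contents' > kubectl.bin
sha256sum kubectl.bin | cut -d' ' -f1 > kubectl.bin.sha256

# Verify the payload the way a downloader would: recompute and compare.
want=$(cat kubectl.bin.sha256)
got=$(sha256sum kubectl.bin | cut -d' ' -f1)
if [ "$want" = "$got" ]; then
  echo "checksum OK"
else
  echo "checksum MISMATCH for kubectl.bin" >&2
  exit 1
fi
```

The preload tarball a few lines earlier is verified the same way, just with an MD5 digest passed inline in the URL rather than fetched from a sidecar file.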
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220207191910-8704"

-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.4-rc.0/LogsDuration (0.31s)

TestDownloadOnly/DeleteAll (12.73s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:193: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:193: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (12.7263677s)
--- PASS: TestDownloadOnly/DeleteAll (12.73s)

TestDownloadOnly/DeleteAlwaysSucceeds (7.32s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-20220207191910-8704
aaa_download_only_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-20220207191910-8704: (7.3184104s)
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (7.32s)

TestDownloadOnlyKic (57.65s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:230: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-20220207192028-8704 --force --alsologtostderr --driver=docker
aaa_download_only_test.go:230: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-20220207192028-8704 --force --alsologtostderr --driver=docker: (47.7129312s)
helpers_test.go:176: Cleaning up "download-docker-20220207192028-8704" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-20220207192028-8704
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-docker-20220207192028-8704: (8.6555087s)
--- PASS: TestDownloadOnlyKic (57.65s)

TestBinaryMirror (16.77s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:316: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-20220207192126-8704 --alsologtostderr --binary-mirror http://127.0.0.1:61942 --driver=docker
aaa_download_only_test.go:316: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-20220207192126-8704 --alsologtostderr --binary-mirror http://127.0.0.1:61942 --driver=docker: (8.0170099s)
helpers_test.go:176: Cleaning up "binary-mirror-20220207192126-8704" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-20220207192126-8704
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p binary-mirror-20220207192126-8704: (8.5410557s)
--- PASS: TestBinaryMirror (16.77s)

TestOffline (285.63s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline

=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-20220207205647-8704 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker

=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-20220207205647-8704 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker: (4m13.1265934s)
helpers_test.go:176: Cleaning up "offline-docker-20220207205647-8704" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-20220207205647-8704

=== CONT  TestOffline
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-20220207205647-8704: (32.5042907s)
--- PASS: TestOffline (285.63s)

TestAddons/Setup (495.43s)

=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-20220207192142-8704 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-20220207192142-8704 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (8m15.4326944s)
--- PASS: TestAddons/Setup (495.43s)

TestAddons/parallel/HelmTiller (30.32s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:407: tiller-deploy stabilized in 43.8382ms

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:409: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/HelmTiller
helpers_test.go:343: "tiller-deploy-6d67d5465d-hkfc5" [5228fb58-09ee-4131-a27b-df92cd5d78ef] Running

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:409: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0521767s
addons_test.go:424: (dbg) Run:  kubectl --context addons-20220207192142-8704 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:424: (dbg) Done: kubectl --context addons-20220207192142-8704 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (19.3162503s)
addons_test.go:429: kubectl --context addons-20220207192142-8704 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: 
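The stderr above is kubectl warning that `-it` requested a TTY while the test harness's stdin is not a terminal, after which kubectl falls back to printing logs; the test tolerates this and still passes. One way scripts avoid the warning is to request a TTY only when stdin actually is one. A sketch under that assumption; the helper name `kubectl_tty_flags` is made up for illustration:

```shell
# Build kubectl's interactivity flags from whether stdin is a terminal:
# -it when attached to a real TTY, plain -i otherwise, so the
# "Unable to use a TTY" warning never fires in CI.
kubectl_tty_flags() {
  if [ -t 0 ]; then
    echo "-it"   # interactive with a TTY
  else
    echo "-i"    # interactive only
  fi
}

# Simulate a non-terminal stdin, as in the test harness.
flags=$(kubectl_tty_flags </dev/null)
echo "$flags"
```

The flags would then be spliced into the `kubectl run --rm helm-test …` invocation in place of the hard-coded `-it`.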
addons_test.go:441: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220207192142-8704 addons disable helm-tiller --alsologtostderr -v=1

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:441: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220207192142-8704 addons disable helm-tiller --alsologtostderr -v=1: (5.816817s)
--- PASS: TestAddons/parallel/HelmTiller (30.32s)

TestAddons/parallel/CSI (69.72s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:512: csi-hostpath-driver pods stabilized in 51.0116ms
addons_test.go:515: (dbg) Run:  kubectl --context addons-20220207192142-8704 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:520: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20220207192142-8704 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:525: (dbg) Run:  kubectl --context addons-20220207192142-8704 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:530: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:343: "task-pv-pod" [cf3250b4-04a6-4e7d-8cff-17cc3ff212d9] Pending

=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod" [cf3250b4-04a6-4e7d-8cff-17cc3ff212d9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod" [cf3250b4-04a6-4e7d-8cff-17cc3ff212d9] Running
addons_test.go:530: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 30.1054827s
addons_test.go:535: (dbg) Run:  kubectl --context addons-20220207192142-8704 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:540: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20220207192142-8704 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20220207192142-8704 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:545: (dbg) Run:  kubectl --context addons-20220207192142-8704 delete pod task-pv-pod
addons_test.go:545: (dbg) Done: kubectl --context addons-20220207192142-8704 delete pod task-pv-pod: (1.7510004s)
addons_test.go:551: (dbg) Run:  kubectl --context addons-20220207192142-8704 delete pvc hpvc
addons_test.go:557: (dbg) Run:  kubectl --context addons-20220207192142-8704 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:562: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20220207192142-8704 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:567: (dbg) Run:  kubectl --context addons-20220207192142-8704 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:572: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:343: "task-pv-pod-restore" [09f09433-2e69-4e61-8625-28e4f02013ab] Pending

=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod-restore" [09f09433-2e69-4e61-8625-28e4f02013ab] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod-restore" [09f09433-2e69-4e61-8625-28e4f02013ab] Running
addons_test.go:572: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.0252553s
addons_test.go:577: (dbg) Run:  kubectl --context addons-20220207192142-8704 delete pod task-pv-pod-restore
addons_test.go:577: (dbg) Done: kubectl --context addons-20220207192142-8704 delete pod task-pv-pod-restore: (1.6034567s)
addons_test.go:581: (dbg) Run:  kubectl --context addons-20220207192142-8704 delete pvc hpvc-restore
addons_test.go:585: (dbg) Run:  kubectl --context addons-20220207192142-8704 delete volumesnapshot new-snapshot-demo
addons_test.go:589: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220207192142-8704 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:589: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220207192142-8704 addons disable csi-hostpath-driver --alsologtostderr -v=1: (14.3452604s)
addons_test.go:593: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220207192142-8704 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:593: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220207192142-8704 addons disable volumesnapshots --alsologtostderr -v=1: (5.9569873s)
--- PASS: TestAddons/parallel/CSI (69.72s)

TestAddons/serial/GCPAuth (29.11s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:604: (dbg) Run:  kubectl --context addons-20220207192142-8704 create -f testdata\busybox.yaml
addons_test.go:604: (dbg) Done: kubectl --context addons-20220207192142-8704 create -f testdata\busybox.yaml: (1.4928059s)
addons_test.go:610: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [0afa8043-b26a-4a27-bbde-c703267a29ca] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [0afa8043-b26a-4a27-bbde-c703267a29ca] Running
addons_test.go:610: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 10.0721892s
addons_test.go:616: (dbg) Run:  kubectl --context addons-20220207192142-8704 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:629: (dbg) Run:  kubectl --context addons-20220207192142-8704 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:653: (dbg) Run:  kubectl --context addons-20220207192142-8704 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:666: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220207192142-8704 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:666: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220207192142-8704 addons disable gcp-auth --alsologtostderr -v=1: (15.6923128s)
--- PASS: TestAddons/serial/GCPAuth (29.11s)

TestAddons/StoppedEnableDisable (25.69s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-20220207192142-8704
addons_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-20220207192142-8704: (20.3164872s)
addons_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-20220207192142-8704
addons_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-20220207192142-8704: (2.6833178s)
addons_test.go:141: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-20220207192142-8704
addons_test.go:141: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-20220207192142-8704: (2.6940572s)
--- PASS: TestAddons/StoppedEnableDisable (25.69s)

TestCertOptions (226.64s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-20220207211420-8704 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost
E0207 21:14:58.455572    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
E0207 21:15:18.725413    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\skaffold-20220207205033-8704\client.crt: The system cannot find the path specified.
=== CONT  TestCertOptions
cert_options_test.go:50: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-20220207211420-8704 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: (2m40.1293356s)
cert_options_test.go:61: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-20220207211420-8704 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:61: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-20220207211420-8704 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (8.3521444s)
cert_options_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-20220207211420-8704 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-20220207211420-8704 -- "sudo cat /etc/kubernetes/admin.conf": (7.8473516s)
helpers_test.go:176: Cleaning up "cert-options-20220207211420-8704" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-20220207211420-8704
=== CONT  TestCertOptions
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-20220207211420-8704: (48.8412278s)
--- PASS: TestCertOptions (226.64s)
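The SAN verification that TestCertOptions performs over ssh can be reproduced locally without a cluster. A minimal sketch, assuming OpenSSL 1.1.1+ is available; the `/tmp/sketch.*` paths and the throwaway self-signed certificate are stand-ins for minikube's `/var/lib/minikube/certs/apiserver.crt`, using the same names and IP the test passes via `--apiserver-names`/`--apiserver-ips`:

```shell
# Generate a throwaway self-signed cert carrying the SANs the test requests.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/sketch.key -out /tmp/sketch.crt -days 1 \
  -subj "/CN=localhost" \
  -addext "subjectAltName=DNS:localhost,DNS:www.google.com,IP:192.168.15.15"

# Decode the cert and show the SAN extension, as the test does for apiserver.crt.
openssl x509 -text -noout -in /tmp/sketch.crt | grep -A1 "Subject Alternative Name"
```

The decoded output should list each requested DNS name and IP address; the test asserts their presence the same way against the apiserver certificate.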

TestCertExpiration (467.2s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:124: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-20220207205647-8704 --memory=2048 --cert-expiration=3m --driver=docker
=== CONT  TestCertExpiration
cert_options_test.go:124: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-20220207205647-8704 --memory=2048 --cert-expiration=3m --driver=docker: (3m33.3857169s)
=== CONT  TestCertExpiration
cert_options_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-20220207205647-8704 --memory=2048 --cert-expiration=8760h --driver=docker
E0207 21:03:55.669688    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\skaffold-20220207205033-8704\client.crt: The system cannot find the path specified.
cert_options_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-20220207205647-8704 --memory=2048 --cert-expiration=8760h --driver=docker: (42.3603037s)
helpers_test.go:176: Cleaning up "cert-expiration-20220207205647-8704" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-20220207205647-8704
E0207 21:04:23.360229    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\skaffold-20220207205033-8704\client.crt: The system cannot find the path specified.
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-20220207205647-8704: (31.4490005s)
--- PASS: TestCertExpiration (467.20s)

TestForceSystemdFlag (263.31s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-20220207205647-8704 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker
=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-20220207205647-8704 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker: (3m32.9885704s)
docker_test.go:105: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-20220207205647-8704 ssh "docker info --format {{.CgroupDriver}}"
=== CONT  TestForceSystemdFlag
docker_test.go:105: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-20220207205647-8704 ssh "docker info --format {{.CgroupDriver}}": (10.123009s)
helpers_test.go:176: Cleaning up "force-systemd-flag-20220207205647-8704" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-20220207205647-8704
=== CONT  TestForceSystemdFlag
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-20220207205647-8704: (40.1971317s)
--- PASS: TestForceSystemdFlag (263.31s)

TestErrorSpam/setup (132.17s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:79: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-20220207193651-8704 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-20220207193651-8704 --driver=docker
error_spam_test.go:79: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-20220207193651-8704 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-20220207193651-8704 --driver=docker: (2m12.1671655s)
error_spam_test.go:89: acceptable stderr: "! C:\\ProgramData\\chocolatey\\bin\\kubectl.exe is version 1.18.2, which may have incompatibilites with Kubernetes 1.23.3."
--- PASS: TestErrorSpam/setup (132.17s)

TestErrorSpam/start (21.78s)

=== RUN   TestErrorSpam/start
error_spam_test.go:214: Cleaning up 1 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220207193651-8704 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-20220207193651-8704 start --dry-run
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220207193651-8704 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-20220207193651-8704 start --dry-run: (7.3272057s)
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220207193651-8704 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-20220207193651-8704 start --dry-run
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220207193651-8704 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-20220207193651-8704 start --dry-run: (7.1796782s)
error_spam_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220207193651-8704 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-20220207193651-8704 start --dry-run
error_spam_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220207193651-8704 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-20220207193651-8704 start --dry-run: (7.2686097s)
--- PASS: TestErrorSpam/start (21.78s)

TestErrorSpam/status (20.64s)

=== RUN   TestErrorSpam/status
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220207193651-8704 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-20220207193651-8704 status
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220207193651-8704 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-20220207193651-8704 status: (6.9083561s)
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220207193651-8704 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-20220207193651-8704 status
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220207193651-8704 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-20220207193651-8704 status: (6.9397114s)
error_spam_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220207193651-8704 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-20220207193651-8704 status
error_spam_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220207193651-8704 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-20220207193651-8704 status: (6.7910407s)
--- PASS: TestErrorSpam/status (20.64s)

TestErrorSpam/pause (17.6s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220207193651-8704 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-20220207193651-8704 pause
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220207193651-8704 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-20220207193651-8704 pause: (6.5215817s)
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220207193651-8704 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-20220207193651-8704 pause
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220207193651-8704 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-20220207193651-8704 pause: (5.4274361s)
error_spam_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220207193651-8704 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-20220207193651-8704 pause
E0207 19:39:58.424561    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
E0207 19:39:58.434434    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
E0207 19:39:58.445832    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
E0207 19:39:58.465910    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
E0207 19:39:58.506659    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
E0207 19:39:58.586923    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
E0207 19:39:58.747228    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
E0207 19:39:59.067753    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
E0207 19:39:59.710033    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
E0207 19:40:00.990304    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
E0207 19:40:03.550716    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
error_spam_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220207193651-8704 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-20220207193651-8704 pause: (5.6437826s)
--- PASS: TestErrorSpam/pause (17.60s)

TestErrorSpam/unpause (18.47s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220207193651-8704 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-20220207193651-8704 unpause
E0207 19:40:08.672271    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220207193651-8704 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-20220207193651-8704 unpause: (6.4718848s)
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220207193651-8704 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-20220207193651-8704 unpause
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220207193651-8704 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-20220207193651-8704 unpause: (6.0705989s)
error_spam_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220207193651-8704 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-20220207193651-8704 unpause
E0207 19:40:18.912775    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
error_spam_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220207193651-8704 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-20220207193651-8704 unpause: (5.9246253s)
--- PASS: TestErrorSpam/unpause (18.47s)

TestErrorSpam/stop (35.06s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220207193651-8704 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-20220207193651-8704 stop
E0207 19:40:39.393628    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220207193651-8704 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-20220207193651-8704 stop: (19.613047s)
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220207193651-8704 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-20220207193651-8704 stop
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220207193651-8704 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-20220207193651-8704 stop: (7.6736034s)
error_spam_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220207193651-8704 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-20220207193651-8704 stop
error_spam_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220207193651-8704 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-20220207193651-8704 stop: (7.7674663s)
--- PASS: TestErrorSpam/stop (35.06s)

TestFunctional/serial/CopySyncFile (0.03s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1715: local sync path: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\test\nested\copy\8704\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

TestFunctional/serial/StartWithProxy (171.14s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2097: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220207194118-8704 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker
E0207 19:41:20.356253    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
E0207 19:42:42.277689    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
functional_test.go:2097: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20220207194118-8704 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker: (2m51.1367342s)
--- PASS: TestFunctional/serial/StartWithProxy (171.14s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (37.55s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220207194118-8704 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20220207194118-8704 --alsologtostderr -v=8: (37.5478126s)
functional_test.go:659: soft start took 37.5490349s for "functional-20220207194118-8704" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.55s)

TestFunctional/serial/KubeContext (0.23s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.23s)

TestFunctional/serial/KubectlGetPods (0.35s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-20220207194118-8704 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.35s)

TestFunctional/serial/CacheCmd/cache/add_remote (18.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1050: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 cache add k8s.gcr.io/pause:3.1
functional_test.go:1050: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 cache add k8s.gcr.io/pause:3.1: (6.1846646s)
functional_test.go:1050: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 cache add k8s.gcr.io/pause:3.3
E0207 19:44:58.427076    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
functional_test.go:1050: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 cache add k8s.gcr.io/pause:3.3: (5.9291139s)
functional_test.go:1050: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 cache add k8s.gcr.io/pause:latest
functional_test.go:1050: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 cache add k8s.gcr.io/pause:latest: (6.202766s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (18.32s)

TestFunctional/serial/CacheCmd/cache/add_local (9.64s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1081: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220207194118-8704 C:\Users\jenkins.minikube3\AppData\Local\Temp\functional-20220207194118-87041854388462
functional_test.go:1081: (dbg) Done: docker build -t minikube-local-cache-test:functional-20220207194118-8704 C:\Users\jenkins.minikube3\AppData\Local\Temp\functional-20220207194118-87041854388462: (2.4541872s)
functional_test.go:1093: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 cache add minikube-local-cache-test:functional-20220207194118-8704
functional_test.go:1093: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 cache add minikube-local-cache-test:functional-20220207194118-8704: (5.5575233s)
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 cache delete minikube-local-cache-test:functional-20220207194118-8704
functional_test.go:1087: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220207194118-8704
functional_test.go:1087: (dbg) Done: docker rmi minikube-local-cache-test:functional-20220207194118-8704: (1.3059486s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (9.64s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.32s)

TestFunctional/serial/CacheCmd/cache/list (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1114: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.30s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (6.7s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1128: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 ssh sudo crictl images
functional_test.go:1128: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 ssh sudo crictl images: (6.7039529s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (6.70s)

TestFunctional/serial/CacheCmd/cache/cache_reload (25.95s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1151: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 ssh sudo docker rmi k8s.gcr.io/pause:latest
E0207 19:45:26.121811    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
functional_test.go:1151: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 ssh sudo docker rmi k8s.gcr.io/pause:latest: (6.7683287s)
functional_test.go:1157: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1157: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (6.5914302s)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1162: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 cache reload
functional_test.go:1162: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 cache reload: (5.9063065s)
functional_test.go:1167: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1167: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: (6.6862478s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (25.95s)

TestFunctional/serial/CacheCmd/cache/delete (0.62s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1176: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.1
functional_test.go:1176: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.62s)

TestFunctional/serial/MinikubeKubectlCmd (2.31s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 kubectl -- --context functional-20220207194118-8704 get pods
functional_test.go:712: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 kubectl -- --context functional-20220207194118-8704 get pods: (2.3096154s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (2.31s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.79s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out\kubectl.exe --context functional-20220207194118-8704 get pods
functional_test.go:737: (dbg) Done: out\kubectl.exe --context functional-20220207194118-8704 get pods: (1.7875917s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.79s)

TestFunctional/serial/ExtraConfig (64.88s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220207194118-8704 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20220207194118-8704 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m4.8791453s)
functional_test.go:757: restart took 1m4.8791453s for "functional-20220207194118-8704" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (64.88s)

TestFunctional/serial/ComponentHealth (0.29s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:811: (dbg) Run:  kubectl --context functional-20220207194118-8704 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:826: etcd phase: Running
functional_test.go:836: etcd status: Ready
functional_test.go:826: kube-apiserver phase: Running
functional_test.go:836: kube-apiserver status: Ready
functional_test.go:826: kube-controller-manager phase: Running
functional_test.go:836: kube-controller-manager status: Ready
functional_test.go:826: kube-scheduler phase: Running
functional_test.go:836: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.29s)

TestFunctional/serial/LogsCmd (7.99s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1240: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 logs
functional_test.go:1240: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 logs: (7.9926834s)
--- PASS: TestFunctional/serial/LogsCmd (7.99s)

TestFunctional/serial/LogsFileCmd (7.84s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1257: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 logs --file C:\Users\jenkins.minikube3\AppData\Local\Temp\functional-20220207194118-87043844766734\logs.txt
functional_test.go:1257: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 logs --file C:\Users\jenkins.minikube3\AppData\Local\Temp\functional-20220207194118-87043844766734\logs.txt: (7.8345414s)
--- PASS: TestFunctional/serial/LogsFileCmd (7.84s)

TestFunctional/parallel/ConfigCmd (1.98s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1203: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1203: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 config get cpus
functional_test.go:1203: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 config get cpus: exit status 14 (318.9562ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1203: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 config set cpus 2
functional_test.go:1203: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 config get cpus
functional_test.go:1203: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 config unset cpus
functional_test.go:1203: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 config get cpus
functional_test.go:1203: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 config get cpus: exit status 14 (313.5977ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (1.98s)

TestFunctional/parallel/DryRun (13.25s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:975: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220207194118-8704 --dry-run --memory 250MB --alsologtostderr --driver=docker

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:975: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20220207194118-8704 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (5.412602s)

-- stdout --
	* [functional-20220207194118-8704] minikube v1.25.1 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=13439
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0207 19:47:22.460966    8324 out.go:297] Setting OutFile to fd 808 ...
	I0207 19:47:22.534580    8324 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:47:22.534580    8324 out.go:310] Setting ErrFile to fd 552...
	I0207 19:47:22.534580    8324 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:47:22.548580    8324 out.go:304] Setting JSON to false
	I0207 19:47:22.551579    8324 start.go:112] hostinfo: {"hostname":"minikube3","uptime":430861,"bootTime":1643832381,"procs":155,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0207 19:47:22.551579    8324 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0207 19:47:22.555599    8324 out.go:176] * [functional-20220207194118-8704] minikube v1.25.1 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I0207 19:47:22.557589    8324 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0207 19:47:22.560636    8324 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0207 19:47:22.562579    8324 out.go:176]   - MINIKUBE_LOCATION=13439
	I0207 19:47:22.564597    8324 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0207 19:47:22.565580    8324 config.go:176] Loaded profile config "functional-20220207194118-8704": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 19:47:22.566599    8324 driver.go:344] Setting default libvirt URI to qemu:///system
	I0207 19:47:25.298857    8324 docker.go:132] docker version: linux-20.10.12
	I0207 19:47:25.305872    8324 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 19:47:27.573059    8324 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.2671755s)
	I0207 19:47:27.574075    8324 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:58 OomKillDisable:true NGoroutines:51 SystemTime:2022-02-07 19:47:26.5398786 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 19:47:27.580185    8324 out.go:176] * Using the docker driver based on existing profile
	I0207 19:47:27.580185    8324 start.go:281] selected driver: docker
	I0207 19:47:27.580185    8324 start.go:798] validating driver "docker" against &{Name:functional-20220207194118-8704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:functional-20220207194118-8704 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 19:47:27.580185    8324 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0207 19:47:27.629892    8324 out.go:176] 
	W0207 19:47:27.630893    8324 out.go:241] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0207 19:47:27.634875    8324 out.go:176] 

** /stderr **
functional_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220207194118-8704 --dry-run --alsologtostderr -v=1 --driver=docker

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20220207194118-8704 --dry-run --alsologtostderr -v=1 --driver=docker: (7.8350184s)
--- PASS: TestFunctional/parallel/DryRun (13.25s)

TestFunctional/parallel/InternationalLanguage (5.6s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1021: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220207194118-8704 --dry-run --memory 250MB --alsologtostderr --driver=docker

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1021: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20220207194118-8704 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (5.5972326s)

-- stdout --
	* [functional-20220207194118-8704] minikube v1.25.1 sur Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=13439
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0207 19:47:16.882429    8004 out.go:297] Setting OutFile to fd 424 ...
	I0207 19:47:16.951924    8004 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:47:16.951977    8004 out.go:310] Setting ErrFile to fd 740...
	I0207 19:47:16.951977    8004 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:47:16.964928    8004 out.go:304] Setting JSON to false
	I0207 19:47:16.967785    8004 start.go:112] hostinfo: {"hostname":"minikube3","uptime":430856,"bootTime":1643832380,"procs":159,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0207 19:47:16.967936    8004 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0207 19:47:16.971562    8004 out.go:176] * [functional-20220207194118-8704] minikube v1.25.1 sur Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I0207 19:47:16.973943    8004 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0207 19:47:16.976726    8004 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0207 19:47:16.979165    8004 out.go:176]   - MINIKUBE_LOCATION=13439
	I0207 19:47:16.980940    8004 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0207 19:47:16.982377    8004 config.go:176] Loaded profile config "functional-20220207194118-8704": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 19:47:16.982915    8004 driver.go:344] Setting default libvirt URI to qemu:///system
	I0207 19:47:19.839934    8004 docker.go:132] docker version: linux-20.10.12
	I0207 19:47:19.849268    8004 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 19:47:22.109666    8004 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.2603868s)
	I0207 19:47:22.109666    8004 info.go:263] docker info: {ID:ZKNM:ATTJ:4PAF:XZ2I:VNK7:GSIP:G4OM:OVT3:Q77Z:5KEJ:P45I:LYQV Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:58 OomKillDisable:true NGoroutines:51 SystemTime:2022-02-07 19:47:21.1070421 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53910265856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
	I0207 19:47:22.114976    8004 out.go:176] * Utilisation du pilote docker basé sur le profil existant
	I0207 19:47:22.114976    8004 start.go:281] selected driver: docker
	I0207 19:47:22.114976    8004 start.go:798] validating driver "docker" against &{Name:functional-20220207194118-8704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:functional-20220207194118-8704 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 19:47:22.115667    8004 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0207 19:47:22.228669    8004 out.go:176] 
	W0207 19:47:22.228669    8004 out.go:241] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0207 19:47:22.230660    8004 out.go:176] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (5.60s)

TestFunctional/parallel/StatusCmd (22.6s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:855: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 status

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:855: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 status: (7.5811316s)
functional_test.go:861: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:861: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (7.4383839s)
functional_test.go:873: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 status -o json

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:873: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 status -o json: (7.5774386s)
--- PASS: TestFunctional/parallel/StatusCmd (22.60s)

TestFunctional/parallel/AddonsCmd (3.35s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1549: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 addons list

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1549: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 addons list: (3.0440328s)
functional_test.go:1561: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (3.35s)

TestFunctional/parallel/PersistentVolumeClaim (50.99s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:343: "storage-provisioner" [d5dc8e4b-bbcd-4cb0-8efe-009602c53bcf] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0475528s
functional_test_pvc_test.go:50: (dbg) Run:  kubectl --context functional-20220207194118-8704 get storageclass -o=json
functional_test_pvc_test.go:70: (dbg) Run:  kubectl --context functional-20220207194118-8704 apply -f testdata/storage-provisioner/pvc.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20220207194118-8704 get pvc myclaim -o=json

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20220207194118-8704 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [61c90dca-e28a-4b60-8fc7-69b9f4975509] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [61c90dca-e28a-4b60-8fc7-69b9f4975509] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 29.0831676s
functional_test_pvc_test.go:101: (dbg) Run:  kubectl --context functional-20220207194118-8704 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:101: (dbg) Done: kubectl --context functional-20220207194118-8704 exec sp-pod -- touch /tmp/mount/foo: (1.5111238s)
functional_test_pvc_test.go:107: (dbg) Run:  kubectl --context functional-20220207194118-8704 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:107: (dbg) Done: kubectl --context functional-20220207194118-8704 delete -f testdata/storage-provisioner/pod.yaml: (2.4226131s)
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20220207194118-8704 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [ee8242c9-bd77-4ea3-a074-afcc5fcc43e8] Pending

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [ee8242c9-bd77-4ea3-a074-afcc5fcc43e8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [ee8242c9-bd77-4ea3-a074-afcc5fcc43e8] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.0522934s
functional_test_pvc_test.go:115: (dbg) Run:  kubectl --context functional-20220207194118-8704 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (50.99s)

TestFunctional/parallel/SSHCmd (15.29s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1584: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 ssh "echo hello"
functional_test.go:1584: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 ssh "echo hello": (7.7061145s)
functional_test.go:1601: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 ssh "cat /etc/hostname"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1601: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 ssh "cat /etc/hostname": (7.5802049s)
--- PASS: TestFunctional/parallel/SSHCmd (15.29s)

TestFunctional/parallel/CpCmd (28.06s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 cp testdata\cp-test.txt /home/docker/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:555: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 cp testdata\cp-test.txt /home/docker/cp-test.txt: (6.1568629s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 ssh -n functional-20220207194118-8704 "sudo cat /home/docker/cp-test.txt"

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 ssh -n functional-20220207194118-8704 "sudo cat /home/docker/cp-test.txt": (7.228055s)
helpers_test.go:555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 cp functional-20220207194118-8704:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\mk_test1411547270\cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:555: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 cp functional-20220207194118-8704:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\mk_test1411547270\cp-test.txt: (7.3063491s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 ssh -n functional-20220207194118-8704 "sudo cat /home/docker/cp-test.txt"

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 ssh -n functional-20220207194118-8704 "sudo cat /home/docker/cp-test.txt": (7.3640635s)
--- PASS: TestFunctional/parallel/CpCmd (28.06s)

TestFunctional/parallel/MySQL (72.42s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1653: (dbg) Run:  kubectl --context functional-20220207194118-8704 replace --force -f testdata\mysql.yaml
functional_test.go:1659: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-b87c45988-2jxcb" [cb4606c5-1ff2-4125-b4b0-6ac92137deb6] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-b87c45988-2jxcb" [cb4606c5-1ff2-4125-b4b0-6ac92137deb6] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1659: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 47.0719304s
functional_test.go:1667: (dbg) Run:  kubectl --context functional-20220207194118-8704 exec mysql-b87c45988-2jxcb -- mysql -ppassword -e "show databases;"
functional_test.go:1667: (dbg) Non-zero exit: kubectl --context functional-20220207194118-8704 exec mysql-b87c45988-2jxcb -- mysql -ppassword -e "show databases;": exit status 1 (686.3324ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1667: (dbg) Run:  kubectl --context functional-20220207194118-8704 exec mysql-b87c45988-2jxcb -- mysql -ppassword -e "show databases;"
functional_test.go:1667: (dbg) Non-zero exit: kubectl --context functional-20220207194118-8704 exec mysql-b87c45988-2jxcb -- mysql -ppassword -e "show databases;": exit status 1 (607.7621ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1667: (dbg) Run:  kubectl --context functional-20220207194118-8704 exec mysql-b87c45988-2jxcb -- mysql -ppassword -e "show databases;"

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1667: (dbg) Non-zero exit: kubectl --context functional-20220207194118-8704 exec mysql-b87c45988-2jxcb -- mysql -ppassword -e "show databases;": exit status 1 (556.3134ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1667: (dbg) Run:  kubectl --context functional-20220207194118-8704 exec mysql-b87c45988-2jxcb -- mysql -ppassword -e "show databases;"
functional_test.go:1667: (dbg) Non-zero exit: kubectl --context functional-20220207194118-8704 exec mysql-b87c45988-2jxcb -- mysql -ppassword -e "show databases;": exit status 1 (673.266ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1667: (dbg) Run:  kubectl --context functional-20220207194118-8704 exec mysql-b87c45988-2jxcb -- mysql -ppassword -e "show databases;"

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1667: (dbg) Non-zero exit: kubectl --context functional-20220207194118-8704 exec mysql-b87c45988-2jxcb -- mysql -ppassword -e "show databases;": exit status 1 (934.8181ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1667: (dbg) Run:  kubectl --context functional-20220207194118-8704 exec mysql-b87c45988-2jxcb -- mysql -ppassword -e "show databases;"
functional_test.go:1667: (dbg) Non-zero exit: kubectl --context functional-20220207194118-8704 exec mysql-b87c45988-2jxcb -- mysql -ppassword -e "show databases;": exit status 1 (586.15ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1667: (dbg) Run:  kubectl --context functional-20220207194118-8704 exec mysql-b87c45988-2jxcb -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (72.42s)

TestFunctional/parallel/FileSync (7.73s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1789: Checking for existence of /etc/test/nested/copy/8704/hosts within VM
functional_test.go:1791: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 ssh "sudo cat /etc/test/nested/copy/8704/hosts"

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1791: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 ssh "sudo cat /etc/test/nested/copy/8704/hosts": (7.733874s)
functional_test.go:1796: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (7.73s)

TestFunctional/parallel/CertSync (46s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1832: Checking for existence of /etc/ssl/certs/8704.pem within VM
functional_test.go:1833: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 ssh "sudo cat /etc/ssl/certs/8704.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1833: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 ssh "sudo cat /etc/ssl/certs/8704.pem": (7.526642s)
functional_test.go:1832: Checking for existence of /usr/share/ca-certificates/8704.pem within VM
functional_test.go:1833: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 ssh "sudo cat /usr/share/ca-certificates/8704.pem"
functional_test.go:1833: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 ssh "sudo cat /usr/share/ca-certificates/8704.pem": (7.3990099s)
functional_test.go:1832: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1833: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 ssh "sudo cat /etc/ssl/certs/51391683.0"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1833: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 ssh "sudo cat /etc/ssl/certs/51391683.0": (7.3260334s)
functional_test.go:1859: Checking for existence of /etc/ssl/certs/87042.pem within VM
functional_test.go:1860: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 ssh "sudo cat /etc/ssl/certs/87042.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1860: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 ssh "sudo cat /etc/ssl/certs/87042.pem": (7.4246922s)
functional_test.go:1859: Checking for existence of /usr/share/ca-certificates/87042.pem within VM
functional_test.go:1860: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 ssh "sudo cat /usr/share/ca-certificates/87042.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1860: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 ssh "sudo cat /usr/share/ca-certificates/87042.pem": (7.2697095s)
functional_test.go:1859: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1860: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1860: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (9.0560008s)
--- PASS: TestFunctional/parallel/CertSync (46.00s)

TestFunctional/parallel/NodeLabels (0.3s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-20220207194118-8704 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.30s)

TestFunctional/parallel/NonActiveRuntimeDisabled (7.55s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1887: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 ssh "sudo systemctl is-active crio"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1887: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 ssh "sudo systemctl is-active crio": exit status 1 (7.5486744s)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (7.55s)

TestFunctional/parallel/ProfileCmd/profile_not_create (10.65s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create

=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1280: (dbg) Run:  out/minikube-windows-amd64.exe profile lis

=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1280: (dbg) Done: out/minikube-windows-amd64.exe profile lis: (3.3370495s)
functional_test.go:1285: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json

=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (7.3092398s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (10.65s)

TestFunctional/parallel/ProfileCmd/profile_list (7.59s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1320: (dbg) Done: out/minikube-windows-amd64.exe profile list: (7.2829934s)
functional_test.go:1325: Took "7.2829934s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1334: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1339: Took "307.3313ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (7.59s)

TestFunctional/parallel/ProfileCmd/profile_json_output (7.66s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1371: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json

=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1371: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (7.3424123s)
functional_test.go:1376: Took "7.3424123s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1384: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1389: Took "319.7671ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (7.66s)

TestFunctional/parallel/Version/short (0.38s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 version --short
--- PASS: TestFunctional/parallel/Version/short (0.38s)

TestFunctional/parallel/Version/components (10.6s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2133: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 version -o=json --components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2133: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 version -o=json --components: (10.6015773s)
--- PASS: TestFunctional/parallel/Version/components (10.60s)

TestFunctional/parallel/ImageCommands/ImageListShort (5.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image ls --format short

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image ls --format short: (5.5744212s)
functional_test.go:262: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.23.3
k8s.gcr.io/kube-proxy:v1.23.3
k8s.gcr.io/kube-controller-manager:v1.23.3
k8s.gcr.io/kube-apiserver:v1.23.3
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-20220207194118-8704
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-20220207194118-8704
docker.io/kubernetesui/metrics-scraper:v1.0.7
docker.io/kubernetesui/dashboard:v2.3.1
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (5.57s)

TestFunctional/parallel/ImageCommands/ImageListTable (4.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image ls --format table

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image ls --format table: (4.3466616s)
functional_test.go:262: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image ls --format table:
|---------------------------------------------|--------------------------------|---------------|--------|
|                    Image                    |              Tag               |   Image ID    |  Size  |
|---------------------------------------------|--------------------------------|---------------|--------|
| docker.io/kubernetesui/dashboard            | v2.3.1                         | e1482a24335a6 | 220MB  |
| k8s.gcr.io/pause                            | 3.1                            | da86e6ba6ca19 | 742kB  |
| k8s.gcr.io/pause                            | latest                         | 350b164e7ae1d | 240kB  |
| docker.io/localhost/my-image                | functional-20220207194118-8704 | e73bb2f29680c | 1.24MB |
| docker.io/library/minikube-local-cache-test | functional-20220207194118-8704 | a52624f2bbc04 | 30B    |
| docker.io/library/nginx                     | latest                         | c316d5a335a5c | 142MB  |
| k8s.gcr.io/kube-apiserver                   | v1.23.3                        | f40be0088a83e | 135MB  |
| k8s.gcr.io/etcd                             | 3.5.1-0                        | 25f8c7f3da61c | 293MB  |
| docker.io/library/mysql                     | 5.7                            | 0712d5dc1b147 | 448MB  |
| k8s.gcr.io/kube-scheduler                   | v1.23.3                        | 99a3486be4f28 | 53.5MB |
| k8s.gcr.io/pause                            | 3.3                            | 0184c1613d929 | 683kB  |
| k8s.gcr.io/echoserver                       | 1.8                            | 82e4c8a736a4f | 95.4MB |
| k8s.gcr.io/kube-proxy                       | v1.23.3                        | 9b7cc99821098 | 112MB  |
| k8s.gcr.io/kube-controller-manager          | v1.23.3                        | b07520cd7ab76 | 125MB  |
| docker.io/library/nginx                     | alpine                         | bef258acf10dc | 23.4MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                             | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-20220207194118-8704 | ffd4cfbbe753e | 32.9MB |
| gcr.io/k8s-minikube/busybox                 | latest                         | beae173ccac6a | 1.24MB |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                         | a4ca41631cc7a | 46.8MB |
| k8s.gcr.io/pause                            | 3.6                            | 6270bb605e12e | 683kB  |
| docker.io/kubernetesui/metrics-scraper      | v1.0.7                         | 7801cfc6d5c07 | 34.4MB |
|---------------------------------------------|--------------------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (4.35s)

TestFunctional/parallel/ImageCommands/ImageListJson (4.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image ls --format json

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image ls --format json: (4.4996025s)
functional_test.go:262: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image ls --format json:
[{"id":"0712d5dc1b147bdda13b0a45d1b12ef5520539d28c2850ae450960bfdcdd20c7","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"448000000"},{"id":"bef258acf10dc257d641c47c3a600c92f87be4b4ce4a5e4752b3eade7533dcd9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23400000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d","repoDigests":[],"repoTags":["k8s.gcr.io/etcd:3.5.1-0"],"size":"293000000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"9b7cc9982109819e8fe5b0b6c0d3122790f88275e13b02f79e7e9e307466aa1b","repoDigests":[],"repoTags":["k8s.gcr.io/kube-proxy:v1.23.3"],"size":"112000000"},{"id":"b07520cd7ab76ec98ea6c07ae56d21d65f29708c24f90a55a3c30d823419577e","repoDigests":[],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.23.3"],"size":"125000000"},{"id":"a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":[],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"46800000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"e1482a24335a6e76d438ae175f79409004588570d3e5dbb4c8140e025e848570","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:v2.3.1"],"size":"220000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"e73bb2f29680c4201724bfd64926b78f0af9198d189a24fa6e52b0d1856ef852","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-20220207194118-8704"],"size":"1240000"},{"id":"7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:v1.0.7"],"size":"34400000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"a52624f2bbc0495b7fc2f362015b2d4c0b7289f11d42311197f7bc104b495144","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220207194118-8704"],"size":"30"},{"id":"c316d5a335a5cf324b0dc83b3da82d7608724769f6454f6d9a621f3ec2534a5a","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"f40be0088a83e79642d0a2a1bbc55e61b9289167385e67701b82ea85fc9bbfc4","repoDigests":[],"repoTags":["k8s.gcr.io/kube-apiserver:v1.23.3"],"size":"135000000"},{"id":"99a3486be4f2837c939313935007928f97b81a1cf11495808d81ad6b14c04078","repoDigests":[],"repoTags":["k8s.gcr.io/kube-scheduler:v1.23.3"],"size":"53500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220207194118-8704"],"size":"32900000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (4.50s)
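For anyone post-processing this report: the `image ls --format json` stdout above is a flat JSON array of image records (`id`, `repoDigests`, `repoTags`, `size` as a decimal string of bytes), so it can be consumed with a few lines of standard-library Python. This is an illustrative sketch, not part of minikube's tooling; the sample record is copied from the output above.

```python
import json

# One record copied from the `image ls --format json` stdout above;
# the real output is a list of such records.
sample = ('[{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee",'
          '"repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"}]')

for img in json.loads(sample):
    for tag in img["repoTags"]:
        # "size" is reported as a decimal string of bytes
        print(f"{tag}\t{int(img['size']) / 1e6:.2f} MB")
```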

TestFunctional/parallel/ImageCommands/ImageListYaml (4.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image ls --format yaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image ls --format yaml: (4.4757592s)
functional_test.go:262: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image ls --format yaml:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 0712d5dc1b147bdda13b0a45d1b12ef5520539d28c2850ae450960bfdcdd20c7
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "448000000"
- id: f40be0088a83e79642d0a2a1bbc55e61b9289167385e67701b82ea85fc9bbfc4
repoDigests: []
repoTags:
- k8s.gcr.io/kube-apiserver:v1.23.3
size: "135000000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: 9b7cc9982109819e8fe5b0b6c0d3122790f88275e13b02f79e7e9e307466aa1b
repoDigests: []
repoTags:
- k8s.gcr.io/kube-proxy:v1.23.3
size: "112000000"
- id: bef258acf10dc257d641c47c3a600c92f87be4b4ce4a5e4752b3eade7533dcd9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23400000"
- id: a52624f2bbc0495b7fc2f362015b2d4c0b7289f11d42311197f7bc104b495144
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220207194118-8704
size: "30"
- id: b07520cd7ab76ec98ea6c07ae56d21d65f29708c24f90a55a3c30d823419577e
repoDigests: []
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.23.3
size: "125000000"
- id: 99a3486be4f2837c939313935007928f97b81a1cf11495808d81ad6b14c04078
repoDigests: []
repoTags:
- k8s.gcr.io/kube-scheduler:v1.23.3
size: "53500000"
- id: e73bb2f29680c4201724bfd64926b78f0af9198d189a24fa6e52b0d1856ef852
repoDigests: []
repoTags:
- docker.io/localhost/my-image:functional-20220207194118-8704
size: "1240000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests: []
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "46800000"
- id: e1482a24335a6e76d438ae175f79409004588570d3e5dbb4c8140e025e848570
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:v2.3.1
size: "220000000"
- id: 7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:v1.0.7
size: "34400000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220207194118-8704
size: "32900000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: c316d5a335a5cf324b0dc83b3da82d7608724769f6454f6d9a621f3ec2534a5a
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:latest
size: "1240000"
- id: 25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d
repoDigests: []
repoTags:
- k8s.gcr.io/etcd:3.5.1-0
size: "293000000"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (4.48s)

TestFunctional/parallel/ImageCommands/ImageBuild (21.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 ssh pgrep buildkitd

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 ssh pgrep buildkitd: exit status 1 (7.6498834s)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image build -t localhost/my-image:functional-20220207194118-8704 testdata\build

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image build -t localhost/my-image:functional-20220207194118-8704 testdata\build: (8.9058499s)
functional_test.go:316: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image build -t localhost/my-image:functional-20220207194118-8704 testdata\build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 677c8f5a03ef
Removing intermediate container 677c8f5a03ef
---> 32d4113c501a
Step 3/3 : ADD content.txt /
---> e73bb2f29680
Successfully built e73bb2f29680
Successfully tagged localhost/my-image:functional-20220207194118-8704
functional_test.go:444: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image ls

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:444: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image ls: (4.7710102s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (21.33s)
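The three build steps in the output above imply a Dockerfile along these lines. This is a reconstruction from the log, not the verbatim contents of `testdata\build`; the build context would also need to contain the `content.txt` file referenced in step 3.

```dockerfile
# Reconstructed from the "Step 1/3" .. "Step 3/3" lines in the build output above.
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
```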

TestFunctional/parallel/ImageCommands/Setup (6.33s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (4.9607872s)
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220207194118-8704
functional_test.go:343: (dbg) Done: docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220207194118-8704: (1.3532272s)
--- PASS: TestFunctional/parallel/ImageCommands/Setup (6.33s)

TestFunctional/parallel/DockerEnv/powershell (29.8s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20220207194118-8704 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-20220207194118-8704"

=== CONT  TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20220207194118-8704 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-20220207194118-8704": (17.9449214s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20220207194118-8704 docker-env | Invoke-Expression ; docker images"

=== CONT  TestFunctional/parallel/DockerEnv/powershell
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20220207194118-8704 docker-env | Invoke-Expression ; docker images": (11.8508457s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (29.80s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (18.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220207194118-8704

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220207194118-8704: (14.1033717s)
functional_test.go:444: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image ls

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:444: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image ls: (4.6477674s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (18.75s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (15.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220207194118-8704

=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220207194118-8704: (10.2011909s)
functional_test.go:444: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image ls

=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:444: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image ls: (4.8984245s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (15.10s)

TestFunctional/parallel/UpdateContextCmd/no_changes (4.52s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1979: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 update-context --alsologtostderr -v=2
E0207 19:49:58.427981    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1979: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 update-context --alsologtostderr -v=2: (4.5183931s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (4.52s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (4.3s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1979: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 update-context --alsologtostderr -v=2

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1979: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 update-context --alsologtostderr -v=2: (4.2960881s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (4.30s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (4.29s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1979: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 update-context --alsologtostderr -v=2
functional_test.go:1979: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 update-context --alsologtostderr -v=2: (4.288539s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (4.29s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (28.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (5.1138297s)
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220207194118-8704
functional_test.go:236: (dbg) Done: docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220207194118-8704: (1.7852518s)
functional_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220207194118-8704

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:241: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220207194118-8704: (16.6307175s)
functional_test.go:444: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image ls

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:444: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image ls: (5.3422012s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (28.89s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:128: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-20220207194118-8704 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.61s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:148: (dbg) Run:  kubectl --context functional-20220207194118-8704 apply -f testdata\testsvc.yaml

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:152: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:343: "nginx-svc" [7162c13e-10d1-4c06-b926-49589fbc79f2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:343: "nginx-svc" [7162c13e-10d1-4c06-b926-49589fbc79f2] Running

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:152: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 16.0615493s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.61s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (9.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image save gcr.io/google-containers/addon-resizer:functional-20220207194118-8704 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar

=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image save gcr.io/google-containers/addon-resizer:functional-20220207194118-8704 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: (9.480165s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (9.48s)

TestFunctional/parallel/ImageCommands/ImageRemove (9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image rm gcr.io/google-containers/addon-resizer:functional-20220207194118-8704

=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image rm gcr.io/google-containers/addon-resizer:functional-20220207194118-8704: (4.556435s)
functional_test.go:444: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image ls

=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:444: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image ls: (4.4470327s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (9.00s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (16.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: (12.1021175s)
functional_test.go:444: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image ls

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:444: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image ls: (4.6437453s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (16.75s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (13.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220207194118-8704

=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Done: docker rmi gcr.io/google-containers/addon-resizer:functional-20220207194118-8704: (1.3767381s)
functional_test.go:420: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220207194118-8704

=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:420: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220207194118-8704: (10.6991288s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220207194118-8704
functional_test.go:425: (dbg) Done: docker image inspect gcr.io/google-containers/addon-resizer:functional-20220207194118-8704: (1.3642555s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (13.45s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:370: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-20220207194118-8704 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to kill pid 9732: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_addon-resizer_images (4.01s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Done: docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8: (2.3244501s)
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220207194118-8704
functional_test.go:186: (dbg) Done: docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220207194118-8704: (1.6693643s)
--- PASS: TestFunctional/delete_addon-resizer_images (4.01s)

TestFunctional/delete_my-image_image (1.34s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220207194118-8704
functional_test.go:194: (dbg) Done: docker rmi -f localhost/my-image:functional-20220207194118-8704: (1.3359716s)
--- PASS: TestFunctional/delete_my-image_image (1.34s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (1.30s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220207194118-8704
functional_test.go:202: (dbg) Done: docker rmi -f minikube-local-cache-test:functional-20220207194118-8704: (1.2930805s)
--- PASS: TestFunctional/delete_minikube_cached_images (1.30s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (171.59s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:40: (dbg) Run:  out/minikube-windows-amd64.exe start -p ingress-addon-legacy-20220207195307-8704 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
E0207 19:53:14.626341    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
E0207 19:53:14.632519    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
E0207 19:53:14.642903    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
E0207 19:53:14.663686    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
E0207 19:53:14.704062    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
E0207 19:53:14.785166    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
E0207 19:53:14.945818    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
E0207 19:53:15.266346    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
E0207 19:53:15.906743    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
E0207 19:53:17.187843    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
E0207 19:53:19.749051    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
E0207 19:53:24.870752    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
E0207 19:53:35.111747    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
E0207 19:53:55.592812    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
E0207 19:54:36.553851    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
E0207 19:54:58.431467    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
E0207 19:55:58.475666    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
ingress_addon_legacy_test.go:40: (dbg) Done: out/minikube-windows-amd64.exe start -p ingress-addon-legacy-20220207195307-8704 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker: (2m51.5893829s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (171.59s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (49.89s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220207195307-8704 addons enable ingress --alsologtostderr -v=5
E0207 19:56:21.486865    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
ingress_addon_legacy_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220207195307-8704 addons enable ingress --alsologtostderr -v=5: (49.8917459s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (49.89s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (4.92s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220207195307-8704 addons enable ingress-dns --alsologtostderr -v=5
ingress_addon_legacy_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220207195307-8704 addons enable ingress-dns --alsologtostderr -v=5: (4.9195957s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (4.92s)

                                                
                                    
TestJSONOutput/start/Command (172.21s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-20220207195804-8704 --output=json --user=testUser --memory=2200 --wait=true --driver=docker
E0207 19:58:14.627372    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
E0207 19:58:42.317466    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
E0207 19:59:58.431739    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-20220207195804-8704 --output=json --user=testUser --memory=2200 --wait=true --driver=docker: (2m52.2074804s)
--- PASS: TestJSONOutput/start/Command (172.21s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (6.54s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-20220207195804-8704 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-20220207195804-8704 --output=json --user=testUser: (6.5412266s)
--- PASS: TestJSONOutput/pause/Command (6.54s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (6.45s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-20220207195804-8704 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-20220207195804-8704 --output=json --user=testUser: (6.4472745s)
--- PASS: TestJSONOutput/unpause/Command (6.45s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (19.84s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-20220207195804-8704 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-20220207195804-8704 --output=json --user=testUser: (19.8436862s)
--- PASS: TestJSONOutput/stop/Command (19.84s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (7.56s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-20220207200150-8704 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-20220207200150-8704 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (290.822ms)

-- stdout --
	{"specversion":"1.0","id":"777a1caa-8da3-467b-b097-8f6144ae7355","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220207200150-8704] minikube v1.25.1 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1fb49432-a842-47e5-ae8c-67a91f2871d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube3\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"ad606b59-f8cb-4a86-930c-f880be0db445","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"2c82ffe8-830a-4397-84d6-e265f6924d68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13439"}}
	{"specversion":"1.0","id":"7f12f58d-b7f2-49fb-9b02-a79925a57797","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"94ee3c10-5997-4cd0-a608-67ef5065aa15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-20220207200150-8704" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-20220207200150-8704
E0207 20:01:53.845877    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
E0207 20:01:53.852005    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
E0207 20:01:53.862593    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
E0207 20:01:53.883005    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
E0207 20:01:53.923750    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
E0207 20:01:54.004254    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
E0207 20:01:54.165494    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
E0207 20:01:54.487020    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
E0207 20:01:55.128232    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
E0207 20:01:56.408685    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-20220207200150-8704: (7.2659732s)
--- PASS: TestErrorJSONOutput (7.56s)

                                                
                                    
TestKicCustomNetwork/create_custom_network (159.46s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-20220207200158-8704 --network=
E0207 20:01:58.970019    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
E0207 20:02:04.090541    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
E0207 20:02:14.331182    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
E0207 20:02:34.812082    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
E0207 20:03:14.629115    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
E0207 20:03:15.773066    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-20220207200158-8704 --network=: (2m14.7038606s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:102: (dbg) Done: docker network ls --format {{.Name}}: (1.1959206s)
helpers_test.go:176: Cleaning up "docker-network-20220207200158-8704" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-20220207200158-8704
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-20220207200158-8704: (23.5493893s)
--- PASS: TestKicCustomNetwork/create_custom_network (159.46s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (149.34s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-20220207200437-8704 --network=bridge
E0207 20:04:37.695556    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
E0207 20:04:58.433365    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-20220207200437-8704 --network=bridge: (2m8.6978552s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:102: (dbg) Done: docker network ls --format {{.Name}}: (1.228854s)
helpers_test.go:176: Cleaning up "docker-network-20220207200437-8704" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-20220207200437-8704
E0207 20:06:53.848127    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-20220207200437-8704: (19.4043231s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (149.34s)

                                                
                                    
TestKicExistingNetwork (166.06s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:102: (dbg) Done: docker network ls --format {{.Name}}: (1.1547116s)
kic_custom_network_test.go:94: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-20220207200712-8704 --network=existing-network
E0207 20:07:21.537189    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
E0207 20:08:14.630813    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:94: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-20220207200712-8704 --network=existing-network: (2m13.6721585s)
helpers_test.go:176: Cleaning up "existing-network-20220207200712-8704" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-20220207200712-8704
E0207 20:09:37.681090    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-20220207200712-8704: (24.22309s)
--- PASS: TestKicExistingNetwork (166.06s)

                                                
                                    
TestMainNoArgs (0.29s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.29s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (58.44s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-20220207200953-8704 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
E0207 20:09:58.434632    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
mount_start_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-20220207200953-8704 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: (57.4416653s)
--- PASS: TestMountStart/serial/StartWithMountFirst (58.44s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (6.69s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:115: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-20220207200953-8704 ssh -- ls /minikube-host
mount_start_test.go:115: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-20220207200953-8704 ssh -- ls /minikube-host: (6.6906239s)
--- PASS: TestMountStart/serial/VerifyMountFirst (6.69s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (59.58s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-20220207200953-8704 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker
E0207 20:11:53.849474    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
mount_start_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-20220207200953-8704 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker: (58.5744917s)
--- PASS: TestMountStart/serial/StartWithMountSecond (59.58s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (6.37s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:115: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-20220207200953-8704 ssh -- ls /minikube-host
mount_start_test.go:115: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-20220207200953-8704 ssh -- ls /minikube-host: (6.3700831s)
--- PASS: TestMountStart/serial/VerifyMountSecond (6.37s)

TestMountStart/serial/DeleteFirst (21.49s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-20220207200953-8704 --alsologtostderr -v=5
pause_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-20220207200953-8704 --alsologtostderr -v=5: (21.4914215s)
--- PASS: TestMountStart/serial/DeleteFirst (21.49s)

TestMountStart/serial/VerifyMountPostDelete (6.46s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:115: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-20220207200953-8704 ssh -- ls /minikube-host
mount_start_test.go:115: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-20220207200953-8704 ssh -- ls /minikube-host: (6.4573s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (6.46s)

TestMountStart/serial/Stop (9.24s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-20220207200953-8704
mount_start_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-20220207200953-8704: (9.2348852s)
--- PASS: TestMountStart/serial/Stop (9.24s)

TestMountStart/serial/RestartStopped (32.94s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:167: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-20220207200953-8704
E0207 20:13:01.493460    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
mount_start_test.go:167: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-20220207200953-8704: (31.935241s)
--- PASS: TestMountStart/serial/RestartStopped (32.94s)

TestMountStart/serial/VerifyMountPostStop (6.37s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:115: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-20220207200953-8704 ssh -- ls /minikube-host
E0207 20:13:14.632449    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
mount_start_test.go:115: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-20220207200953-8704 ssh -- ls /minikube-host: (6.3715062s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (6.37s)

TestMultiNode/serial/FreshStart2Nodes (305.87s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220207201346-8704 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker
E0207 20:14:58.437768    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
E0207 20:16:53.850502    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
E0207 20:18:14.634279    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
E0207 20:18:16.901912    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
multinode_test.go:86: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20220207201346-8704 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker: (4m52.9951142s)
multinode_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 status --alsologtostderr
multinode_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 status --alsologtostderr: (12.874981s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (305.87s)

TestMultiNode/serial/DeployApp2Nodes (24.97s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:486: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220207201346-8704 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220207201346-8704 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: (2.9707468s)
multinode_test.go:491: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220207201346-8704 -- rollout status deployment/busybox
multinode_test.go:491: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220207201346-8704 -- rollout status deployment/busybox: (3.8426727s)
multinode_test.go:497: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220207201346-8704 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:497: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220207201346-8704 -- get pods -o jsonpath='{.items[*].status.podIP}': (1.8677035s)
multinode_test.go:509: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220207201346-8704 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:509: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220207201346-8704 -- get pods -o jsonpath='{.items[*].metadata.name}': (1.8650665s)
multinode_test.go:517: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220207201346-8704 -- exec busybox-7978565885-4rsvz -- nslookup kubernetes.io
multinode_test.go:517: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220207201346-8704 -- exec busybox-7978565885-4rsvz -- nslookup kubernetes.io: (3.3623479s)
multinode_test.go:517: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220207201346-8704 -- exec busybox-7978565885-v5n8k -- nslookup kubernetes.io
multinode_test.go:517: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220207201346-8704 -- exec busybox-7978565885-v5n8k -- nslookup kubernetes.io: (3.0014929s)
multinode_test.go:527: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220207201346-8704 -- exec busybox-7978565885-4rsvz -- nslookup kubernetes.default
multinode_test.go:527: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220207201346-8704 -- exec busybox-7978565885-4rsvz -- nslookup kubernetes.default: (2.0636481s)
multinode_test.go:527: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220207201346-8704 -- exec busybox-7978565885-v5n8k -- nslookup kubernetes.default
multinode_test.go:527: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220207201346-8704 -- exec busybox-7978565885-v5n8k -- nslookup kubernetes.default: (1.9910299s)
multinode_test.go:535: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220207201346-8704 -- exec busybox-7978565885-4rsvz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:535: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220207201346-8704 -- exec busybox-7978565885-4rsvz -- nslookup kubernetes.default.svc.cluster.local: (1.9574035s)
multinode_test.go:535: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220207201346-8704 -- exec busybox-7978565885-v5n8k -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:535: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220207201346-8704 -- exec busybox-7978565885-v5n8k -- nslookup kubernetes.default.svc.cluster.local: (2.0441292s)
--- PASS: TestMultiNode/serial/DeployApp2Nodes (24.97s)

TestMultiNode/serial/PingHostFrom2Pods (9.75s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:545: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220207201346-8704 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:545: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220207201346-8704 -- get pods -o jsonpath='{.items[*].metadata.name}': (1.7577495s)
multinode_test.go:553: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220207201346-8704 -- exec busybox-7978565885-4rsvz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:553: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220207201346-8704 -- exec busybox-7978565885-4rsvz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": (2.0189675s)
multinode_test.go:561: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220207201346-8704 -- exec busybox-7978565885-4rsvz -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:561: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220207201346-8704 -- exec busybox-7978565885-4rsvz -- sh -c "ping -c 1 192.168.65.2": (2.0225291s)
multinode_test.go:553: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220207201346-8704 -- exec busybox-7978565885-v5n8k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:553: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220207201346-8704 -- exec busybox-7978565885-v5n8k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": (1.9580784s)
multinode_test.go:561: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220207201346-8704 -- exec busybox-7978565885-v5n8k -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:561: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220207201346-8704 -- exec busybox-7978565885-v5n8k -- sh -c "ping -c 1 192.168.65.2": (1.99432s)
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (9.75s)

TestMultiNode/serial/AddNode (143.38s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-20220207201346-8704 -v 3 --alsologtostderr
E0207 20:19:58.438707    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
multinode_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-20220207201346-8704 -v 3 --alsologtostderr: (2m8.5162392s)
multinode_test.go:117: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 status --alsologtostderr
multinode_test.go:117: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 status --alsologtostderr: (14.8642407s)
--- PASS: TestMultiNode/serial/AddNode (143.38s)

TestMultiNode/serial/ProfileList (6.84s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
E0207 20:21:53.851641    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
multinode_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (6.8378303s)
--- PASS: TestMultiNode/serial/ProfileList (6.84s)

TestMultiNode/serial/CopyFile (235.13s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 status --output json --alsologtostderr
multinode_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 status --output json --alsologtostderr: (14.957371s)
helpers_test.go:555: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 cp testdata\cp-test.txt multinode-20220207201346-8704:/home/docker/cp-test.txt
helpers_test.go:555: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 cp testdata\cp-test.txt multinode-20220207201346-8704:/home/docker/cp-test.txt: (6.8072803s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 ssh -n multinode-20220207201346-8704 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 ssh -n multinode-20220207201346-8704 "sudo cat /home/docker/cp-test.txt": (6.8393418s)
helpers_test.go:555: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 cp multinode-20220207201346-8704:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\mk_cp_test2172216992\cp-test_multinode-20220207201346-8704.txt
helpers_test.go:555: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 cp multinode-20220207201346-8704:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\mk_cp_test2172216992\cp-test_multinode-20220207201346-8704.txt: (6.6440901s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 ssh -n multinode-20220207201346-8704 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 ssh -n multinode-20220207201346-8704 "sudo cat /home/docker/cp-test.txt": (6.727842s)
helpers_test.go:555: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 cp multinode-20220207201346-8704:/home/docker/cp-test.txt multinode-20220207201346-8704-m02:/home/docker/cp-test_multinode-20220207201346-8704_multinode-20220207201346-8704-m02.txt
helpers_test.go:555: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 cp multinode-20220207201346-8704:/home/docker/cp-test.txt multinode-20220207201346-8704-m02:/home/docker/cp-test_multinode-20220207201346-8704_multinode-20220207201346-8704-m02.txt: (9.5684693s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 ssh -n multinode-20220207201346-8704 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 ssh -n multinode-20220207201346-8704 "sudo cat /home/docker/cp-test.txt": (6.8220971s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 ssh -n multinode-20220207201346-8704-m02 "sudo cat /home/docker/cp-test_multinode-20220207201346-8704_multinode-20220207201346-8704-m02.txt"
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 ssh -n multinode-20220207201346-8704-m02 "sudo cat /home/docker/cp-test_multinode-20220207201346-8704_multinode-20220207201346-8704-m02.txt": (6.7272261s)
helpers_test.go:555: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 cp multinode-20220207201346-8704:/home/docker/cp-test.txt multinode-20220207201346-8704-m03:/home/docker/cp-test_multinode-20220207201346-8704_multinode-20220207201346-8704-m03.txt
helpers_test.go:555: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 cp multinode-20220207201346-8704:/home/docker/cp-test.txt multinode-20220207201346-8704-m03:/home/docker/cp-test_multinode-20220207201346-8704_multinode-20220207201346-8704-m03.txt: (9.8121052s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 ssh -n multinode-20220207201346-8704 "sudo cat /home/docker/cp-test.txt"
E0207 20:23:14.636354    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 ssh -n multinode-20220207201346-8704 "sudo cat /home/docker/cp-test.txt": (6.7894523s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 ssh -n multinode-20220207201346-8704-m03 "sudo cat /home/docker/cp-test_multinode-20220207201346-8704_multinode-20220207201346-8704-m03.txt"
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 ssh -n multinode-20220207201346-8704-m03 "sudo cat /home/docker/cp-test_multinode-20220207201346-8704_multinode-20220207201346-8704-m03.txt": (6.7326877s)
helpers_test.go:555: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 cp testdata\cp-test.txt multinode-20220207201346-8704-m02:/home/docker/cp-test.txt
helpers_test.go:555: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 cp testdata\cp-test.txt multinode-20220207201346-8704-m02:/home/docker/cp-test.txt: (6.731283s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 ssh -n multinode-20220207201346-8704-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 ssh -n multinode-20220207201346-8704-m02 "sudo cat /home/docker/cp-test.txt": (6.7350234s)
helpers_test.go:555: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 cp multinode-20220207201346-8704-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\mk_cp_test2172216992\cp-test_multinode-20220207201346-8704-m02.txt
helpers_test.go:555: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 cp multinode-20220207201346-8704-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\mk_cp_test2172216992\cp-test_multinode-20220207201346-8704-m02.txt: (6.9765611s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 ssh -n multinode-20220207201346-8704-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 ssh -n multinode-20220207201346-8704-m02 "sudo cat /home/docker/cp-test.txt": (6.9681496s)
helpers_test.go:555: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 cp multinode-20220207201346-8704-m02:/home/docker/cp-test.txt multinode-20220207201346-8704:/home/docker/cp-test_multinode-20220207201346-8704-m02_multinode-20220207201346-8704.txt
helpers_test.go:555: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 cp multinode-20220207201346-8704-m02:/home/docker/cp-test.txt multinode-20220207201346-8704:/home/docker/cp-test_multinode-20220207201346-8704-m02_multinode-20220207201346-8704.txt: (9.383102s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 ssh -n multinode-20220207201346-8704-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 ssh -n multinode-20220207201346-8704-m02 "sudo cat /home/docker/cp-test.txt": (6.7602984s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 ssh -n multinode-20220207201346-8704 "sudo cat /home/docker/cp-test_multinode-20220207201346-8704-m02_multinode-20220207201346-8704.txt"
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 ssh -n multinode-20220207201346-8704 "sudo cat /home/docker/cp-test_multinode-20220207201346-8704-m02_multinode-20220207201346-8704.txt": (6.9975067s)
helpers_test.go:555: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 cp multinode-20220207201346-8704-m02:/home/docker/cp-test.txt multinode-20220207201346-8704-m03:/home/docker/cp-test_multinode-20220207201346-8704-m02_multinode-20220207201346-8704-m03.txt
helpers_test.go:555: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 cp multinode-20220207201346-8704-m02:/home/docker/cp-test.txt multinode-20220207201346-8704-m03:/home/docker/cp-test_multinode-20220207201346-8704-m02_multinode-20220207201346-8704-m03.txt: (9.3751309s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 ssh -n multinode-20220207201346-8704-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 ssh -n multinode-20220207201346-8704-m02 "sudo cat /home/docker/cp-test.txt": (6.8036563s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 ssh -n multinode-20220207201346-8704-m03 "sudo cat /home/docker/cp-test_multinode-20220207201346-8704-m02_multinode-20220207201346-8704-m03.txt"
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 ssh -n multinode-20220207201346-8704-m03 "sudo cat /home/docker/cp-test_multinode-20220207201346-8704-m02_multinode-20220207201346-8704-m03.txt": (6.7789708s)
helpers_test.go:555: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 cp testdata\cp-test.txt multinode-20220207201346-8704-m03:/home/docker/cp-test.txt
helpers_test.go:555: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 cp testdata\cp-test.txt multinode-20220207201346-8704-m03:/home/docker/cp-test.txt: (6.7038128s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 ssh -n multinode-20220207201346-8704-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 ssh -n multinode-20220207201346-8704-m03 "sudo cat /home/docker/cp-test.txt": (6.7220969s)
helpers_test.go:555: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 cp multinode-20220207201346-8704-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\mk_cp_test2172216992\cp-test_multinode-20220207201346-8704-m03.txt
E0207 20:24:58.438925    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
helpers_test.go:555: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 cp multinode-20220207201346-8704-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\mk_cp_test2172216992\cp-test_multinode-20220207201346-8704-m03.txt: (6.746716s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 ssh -n multinode-20220207201346-8704-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 ssh -n multinode-20220207201346-8704-m03 "sudo cat /home/docker/cp-test.txt": (6.9632918s)
helpers_test.go:555: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 cp multinode-20220207201346-8704-m03:/home/docker/cp-test.txt multinode-20220207201346-8704:/home/docker/cp-test_multinode-20220207201346-8704-m03_multinode-20220207201346-8704.txt
helpers_test.go:555: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 cp multinode-20220207201346-8704-m03:/home/docker/cp-test.txt multinode-20220207201346-8704:/home/docker/cp-test_multinode-20220207201346-8704-m03_multinode-20220207201346-8704.txt: (9.6162679s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 ssh -n multinode-20220207201346-8704-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 ssh -n multinode-20220207201346-8704-m03 "sudo cat /home/docker/cp-test.txt": (6.7693704s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 ssh -n multinode-20220207201346-8704 "sudo cat /home/docker/cp-test_multinode-20220207201346-8704-m03_multinode-20220207201346-8704.txt"
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 ssh -n multinode-20220207201346-8704 "sudo cat /home/docker/cp-test_multinode-20220207201346-8704-m03_multinode-20220207201346-8704.txt": (6.7217825s)
helpers_test.go:555: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 cp multinode-20220207201346-8704-m03:/home/docker/cp-test.txt multinode-20220207201346-8704-m02:/home/docker/cp-test_multinode-20220207201346-8704-m03_multinode-20220207201346-8704-m02.txt
helpers_test.go:555: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 cp multinode-20220207201346-8704-m03:/home/docker/cp-test.txt multinode-20220207201346-8704-m02:/home/docker/cp-test_multinode-20220207201346-8704-m03_multinode-20220207201346-8704-m02.txt: (9.4756589s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 ssh -n multinode-20220207201346-8704-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 ssh -n multinode-20220207201346-8704-m03 "sudo cat /home/docker/cp-test.txt": (6.631208s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 ssh -n multinode-20220207201346-8704-m02 "sudo cat /home/docker/cp-test_multinode-20220207201346-8704-m03_multinode-20220207201346-8704-m02.txt"
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 ssh -n multinode-20220207201346-8704-m02 "sudo cat /home/docker/cp-test_multinode-20220207201346-8704-m03_multinode-20220207201346-8704-m02.txt": (6.8353451s)
--- PASS: TestMultiNode/serial/CopyFile (235.13s)

TestMultiNode/serial/StopNode (32.39s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:215: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 node stop m03
multinode_test.go:215: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 node stop m03: (8.2335649s)
multinode_test.go:221: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 status
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 status: exit status 7 (12.1124501s)
-- stdout --
	multinode-20220207201346-8704
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220207201346-8704-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220207201346-8704-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
-- /stdout --
multinode_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 status --alsologtostderr
E0207 20:26:17.688618    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
multinode_test.go:228: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 status --alsologtostderr: exit status 7 (12.0461525s)
-- stdout --
	multinode-20220207201346-8704
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220207201346-8704-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220207201346-8704-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
-- /stdout --
** stderr ** 
	I0207 20:26:13.466196   12108 out.go:297] Setting OutFile to fd 792 ...
	I0207 20:26:13.534191   12108 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 20:26:13.534191   12108 out.go:310] Setting ErrFile to fd 768...
	I0207 20:26:13.534191   12108 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 20:26:13.571306   12108 out.go:304] Setting JSON to false
	I0207 20:26:13.571306   12108 mustload.go:65] Loading cluster: multinode-20220207201346-8704
	I0207 20:26:13.571946   12108 config.go:176] Loaded profile config "multinode-20220207201346-8704": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 20:26:13.571946   12108 status.go:253] checking status of multinode-20220207201346-8704 ...
	I0207 20:26:13.583624   12108 cli_runner.go:133] Run: docker container inspect multinode-20220207201346-8704 --format={{.State.Status}}
	I0207 20:26:16.079317   12108 cli_runner.go:186] Completed: docker container inspect multinode-20220207201346-8704 --format={{.State.Status}}: (2.49568s)
	I0207 20:26:16.079317   12108 status.go:328] multinode-20220207201346-8704 host status = "Running" (err=<nil>)
	I0207 20:26:16.079599   12108 host.go:66] Checking if "multinode-20220207201346-8704" exists ...
	I0207 20:26:16.086706   12108 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220207201346-8704
	I0207 20:26:17.331022   12108 cli_runner.go:186] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220207201346-8704: (1.2441334s)
	I0207 20:26:17.331116   12108 host.go:66] Checking if "multinode-20220207201346-8704" exists ...
	I0207 20:26:17.340768   12108 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0207 20:26:17.345888   12108 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220207201346-8704
	I0207 20:26:18.578156   12108 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220207201346-8704: (1.2321148s)
	I0207 20:26:18.578390   12108 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63438 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-20220207201346-8704\id_rsa Username:docker}
	I0207 20:26:18.663716   12108 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.3229414s)
	I0207 20:26:18.671716   12108 ssh_runner.go:195] Run: systemctl --version
	I0207 20:26:18.688749   12108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0207 20:26:18.718720   12108 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220207201346-8704
	I0207 20:26:19.953822   12108 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220207201346-8704: (1.2349778s)
	I0207 20:26:19.956011   12108 kubeconfig.go:92] found "multinode-20220207201346-8704" server: "https://127.0.0.1:63442"
	I0207 20:26:19.956011   12108 api_server.go:165] Checking apiserver status ...
	I0207 20:26:19.964085   12108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0207 20:26:20.029619   12108 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1707/cgroup
	I0207 20:26:20.059196   12108 api_server.go:181] apiserver freezer: "20:freezer:/docker/d4f48adb24065d2e13432a19203f8f88ed3895dc9210c0f138ad39b270e945c3/kubepods/burstable/pod81ce50effbafc80fa482bff0176d6270/740f7055b9a7985914ba6c78fb6b0f164d21c0aae095e331d5d781857ca9d0e7"
	I0207 20:26:20.068131   12108 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d4f48adb24065d2e13432a19203f8f88ed3895dc9210c0f138ad39b270e945c3/kubepods/burstable/pod81ce50effbafc80fa482bff0176d6270/740f7055b9a7985914ba6c78fb6b0f164d21c0aae095e331d5d781857ca9d0e7/freezer.state
	I0207 20:26:20.095387   12108 api_server.go:203] freezer state: "THAWED"
	I0207 20:26:20.095387   12108 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:63442/healthz ...
	I0207 20:26:20.113372   12108 api_server.go:266] https://127.0.0.1:63442/healthz returned 200:
	ok
	I0207 20:26:20.113372   12108 status.go:419] multinode-20220207201346-8704 apiserver status = Running (err=<nil>)
	I0207 20:26:20.113372   12108 status.go:255] multinode-20220207201346-8704 status: &{Name:multinode-20220207201346-8704 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0207 20:26:20.113372   12108 status.go:253] checking status of multinode-20220207201346-8704-m02 ...
	I0207 20:26:20.123404   12108 cli_runner.go:133] Run: docker container inspect multinode-20220207201346-8704-m02 --format={{.State.Status}}
	I0207 20:26:21.402473   12108 cli_runner.go:186] Completed: docker container inspect multinode-20220207201346-8704-m02 --format={{.State.Status}}: (1.2789388s)
	I0207 20:26:21.402708   12108 status.go:328] multinode-20220207201346-8704-m02 host status = "Running" (err=<nil>)
	I0207 20:26:21.402708   12108 host.go:66] Checking if "multinode-20220207201346-8704-m02" exists ...
	I0207 20:26:21.408506   12108 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220207201346-8704-m02
	I0207 20:26:22.605320   12108 cli_runner.go:186] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220207201346-8704-m02: (1.1966166s)
	I0207 20:26:22.605463   12108 host.go:66] Checking if "multinode-20220207201346-8704-m02" exists ...
	I0207 20:26:22.613233   12108 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0207 20:26:22.618156   12108 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220207201346-8704-m02
	I0207 20:26:23.853446   12108 cli_runner.go:186] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220207201346-8704-m02: (1.2352842s)
	I0207 20:26:23.853446   12108 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63519 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-20220207201346-8704-m02\id_rsa Username:docker}
	I0207 20:26:23.987736   12108 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.3744955s)
	I0207 20:26:23.998327   12108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0207 20:26:24.029897   12108 status.go:255] multinode-20220207201346-8704-m02 status: &{Name:multinode-20220207201346-8704-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0207 20:26:24.029897   12108 status.go:253] checking status of multinode-20220207201346-8704-m03 ...
	I0207 20:26:24.049906   12108 cli_runner.go:133] Run: docker container inspect multinode-20220207201346-8704-m03 --format={{.State.Status}}
	I0207 20:26:25.285161   12108 cli_runner.go:186] Completed: docker container inspect multinode-20220207201346-8704-m03 --format={{.State.Status}}: (1.2351936s)
	I0207 20:26:25.285161   12108 status.go:328] multinode-20220207201346-8704-m03 host status = "Stopped" (err=<nil>)
	I0207 20:26:25.285161   12108 status.go:341] host is not running, skipping remaining checks
	I0207 20:26:25.285161   12108 status.go:255] multinode-20220207201346-8704-m03 status: &{Name:multinode-20220207201346-8704-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (32.39s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (62.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:249: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:249: (dbg) Done: docker version -f {{.Server.Version}}: (1.1940069s)
multinode_test.go:259: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 node start m03 --alsologtostderr
E0207 20:26:53.854476    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
multinode_test.go:259: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 node start m03 --alsologtostderr: (46.4207384s)
multinode_test.go:266: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 status
multinode_test.go:266: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 status: (14.7464868s)
multinode_test.go:280: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (62.64s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (248.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:288: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20220207201346-8704
multinode_test.go:295: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-20220207201346-8704
multinode_test.go:295: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-20220207201346-8704: (42.8610582s)
multinode_test.go:300: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220207201346-8704 --wait=true -v=8 --alsologtostderr
E0207 20:28:14.637410    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
E0207 20:29:41.499874    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
E0207 20:29:58.442419    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
multinode_test.go:300: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20220207201346-8704 --wait=true -v=8 --alsologtostderr: (3m24.7518091s)
multinode_test.go:305: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20220207201346-8704
--- PASS: TestMultiNode/serial/RestartKeepsNodes (248.22s)

                                                
                                    
TestMultiNode/serial/DeleteNode (49.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:399: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 node delete m03
E0207 20:31:53.855403    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
multinode_test.go:399: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 node delete m03: (36.7850567s)
multinode_test.go:405: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 status --alsologtostderr
multinode_test.go:405: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 status --alsologtostderr: (10.7865102s)
multinode_test.go:419: (dbg) Run:  docker volume ls
multinode_test.go:419: (dbg) Done: docker volume ls: (1.2169379s)
multinode_test.go:429: (dbg) Run:  kubectl get nodes
multinode_test.go:437: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (49.47s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (44.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:319: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 stop
multinode_test.go:319: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 stop: (36.5260384s)
multinode_test.go:325: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 status
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 status: exit status 7 (3.7802478s)

                                                
                                                
-- stdout --
	multinode-20220207201346-8704
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220207201346-8704-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:332: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 status --alsologtostderr
multinode_test.go:332: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 status --alsologtostderr: exit status 7 (3.8579735s)

                                                
                                                
-- stdout --
	multinode-20220207201346-8704
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220207201346-8704-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0207 20:33:06.142817    4972 out.go:297] Setting OutFile to fd 776 ...
	I0207 20:33:06.196576    4972 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 20:33:06.196576    4972 out.go:310] Setting ErrFile to fd 436...
	I0207 20:33:06.196576    4972 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 20:33:06.206968    4972 out.go:304] Setting JSON to false
	I0207 20:33:06.206968    4972 mustload.go:65] Loading cluster: multinode-20220207201346-8704
	I0207 20:33:06.207610    4972 config.go:176] Loaded profile config "multinode-20220207201346-8704": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 20:33:06.207610    4972 status.go:253] checking status of multinode-20220207201346-8704 ...
	I0207 20:33:06.217594    4972 cli_runner.go:133] Run: docker container inspect multinode-20220207201346-8704 --format={{.State.Status}}
	I0207 20:33:08.569788    4972 cli_runner.go:186] Completed: docker container inspect multinode-20220207201346-8704 --format={{.State.Status}}: (2.3520322s)
	I0207 20:33:08.569865    4972 status.go:328] multinode-20220207201346-8704 host status = "Stopped" (err=<nil>)
	I0207 20:33:08.569865    4972 status.go:341] host is not running, skipping remaining checks
	I0207 20:33:08.569936    4972 status.go:255] multinode-20220207201346-8704 status: &{Name:multinode-20220207201346-8704 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0207 20:33:08.569936    4972 status.go:253] checking status of multinode-20220207201346-8704-m02 ...
	I0207 20:33:08.585534    4972 cli_runner.go:133] Run: docker container inspect multinode-20220207201346-8704-m02 --format={{.State.Status}}
	I0207 20:33:09.782467    4972 cli_runner.go:186] Completed: docker container inspect multinode-20220207201346-8704-m02 --format={{.State.Status}}: (1.1968393s)
	I0207 20:33:09.782525    4972 status.go:328] multinode-20220207201346-8704-m02 host status = "Stopped" (err=<nil>)
	I0207 20:33:09.782525    4972 status.go:341] host is not running, skipping remaining checks
	I0207 20:33:09.782525    4972 status.go:255] multinode-20220207201346-8704-m02 status: &{Name:multinode-20220207201346-8704-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (44.16s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (165.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:349: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:349: (dbg) Done: docker version -f {{.Server.Version}}: (1.1958871s)
multinode_test.go:359: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220207201346-8704 --wait=true -v=8 --alsologtostderr --driver=docker
E0207 20:33:14.638696    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
E0207 20:34:56.908666    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
E0207 20:34:58.442335    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
multinode_test.go:359: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20220207201346-8704 --wait=true -v=8 --alsologtostderr --driver=docker: (2m33.0177114s)
multinode_test.go:365: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 status --alsologtostderr
multinode_test.go:365: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220207201346-8704 status --alsologtostderr: (10.9181792s)
multinode_test.go:379: (dbg) Run:  kubectl get nodes
multinode_test.go:387: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (165.78s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (176.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:448: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20220207201346-8704
multinode_test.go:457: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220207201346-8704-m02 --driver=docker
multinode_test.go:457: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20220207201346-8704-m02 --driver=docker: exit status 14 (336.9323ms)

                                                
                                                
-- stdout --
	* [multinode-20220207201346-8704-m02] minikube v1.25.1 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=13439
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220207201346-8704-m02' is duplicated with machine name 'multinode-20220207201346-8704-m02' in profile 'multinode-20220207201346-8704'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:465: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220207201346-8704-m03 --driver=docker
E0207 20:36:53.856098    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
E0207 20:38:14.639846    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
multinode_test.go:465: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20220207201346-8704-m03 --driver=docker: (2m24.3596892s)
multinode_test.go:472: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-20220207201346-8704
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-20220207201346-8704: exit status 80 (5.8701909s)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-20220207201346-8704
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220207201346-8704-m03 already exists in multinode-20220207201346-8704-m03 profile
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube_node_2bbdfd0e0a46af455ae5a771b1270736051e61d9_1.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:477: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-20220207201346-8704-m03
multinode_test.go:477: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-20220207201346-8704-m03: (25.5392371s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (176.40s)

                                                
                                    
TestPreload (410.41s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-20220207203934-8704 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0
E0207 20:39:58.443831    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
E0207 20:41:53.858420    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
E0207 20:42:57.694787    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
E0207 20:43:14.642042    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
preload_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-20220207203934-8704 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0: (3m44.2016774s)
preload_test.go:62: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p test-preload-20220207203934-8704 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:62: (dbg) Done: out/minikube-windows-amd64.exe ssh -p test-preload-20220207203934-8704 -- docker pull gcr.io/k8s-minikube/busybox: (8.1630329s)
preload_test.go:72: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-20220207203934-8704 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --kubernetes-version=v1.17.3
E0207 20:44:58.446038    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
preload_test.go:72: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-20220207203934-8704 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --kubernetes-version=v1.17.3: (2m24.999626s)
preload_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p test-preload-20220207203934-8704 -- docker images
preload_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe ssh -p test-preload-20220207203934-8704 -- docker images: (6.8143686s)
helpers_test.go:176: Cleaning up "test-preload-20220207203934-8704" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-20220207203934-8704
E0207 20:46:21.507607    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-20220207203934-8704: (26.2299349s)
--- PASS: TestPreload (410.41s)

                                                
                                    
TestScheduledStopWindows (248.71s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:129: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-20220207204624-8704 --memory=2048 --driver=docker
E0207 20:46:53.860876    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
E0207 20:48:14.643266    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:129: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-20220207204624-8704 --memory=2048 --driver=docker: (2m16.9047779s)
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-20220207204624-8704 --schedule 5m
scheduled_stop_test.go:138: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-20220207204624-8704 --schedule 5m: (5.6400701s)
scheduled_stop_test.go:192: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-20220207204624-8704 -n scheduled-stop-20220207204624-8704
scheduled_stop_test.go:192: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-20220207204624-8704 -n scheduled-stop-20220207204624-8704: (7.5770238s)
scheduled_stop_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-20220207204624-8704 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-20220207204624-8704 -- sudo systemctl show minikube-scheduled-stop --no-page: (6.7139176s)
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-20220207204624-8704 --schedule 5s
scheduled_stop_test.go:138: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-20220207204624-8704 --schedule 5s: (4.9004193s)
E0207 20:49:58.447011    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-20220207204624-8704
scheduled_stop_test.go:206: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-20220207204624-8704: exit status 7 (2.7757772s)

                                                
                                                
-- stdout --
	scheduled-stop-20220207204624-8704
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:177: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20220207204624-8704 -n scheduled-stop-20220207204624-8704
scheduled_stop_test.go:177: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20220207204624-8704 -n scheduled-stop-20220207204624-8704: exit status 7 (2.8005944s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:177: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-20220207204624-8704" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-20220207204624-8704
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-20220207204624-8704: (21.3958597s)
--- PASS: TestScheduledStopWindows (248.71s)

                                                
                                    
TestSkaffold (237.03s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:57: (dbg) Run:  C:\Users\jenkins.minikube3\AppData\Local\Temp\skaffold.exe4191359855 version
skaffold_test.go:61: skaffold version: v1.35.2
skaffold_test.go:64: (dbg) Run:  out/minikube-windows-amd64.exe start -p skaffold-20220207205033-8704 --memory=2600 --driver=docker
E0207 20:51:36.915069    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
E0207 20:51:53.861485    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
skaffold_test.go:64: (dbg) Done: out/minikube-windows-amd64.exe start -p skaffold-20220207205033-8704 --memory=2600 --driver=docker: (2m18.7766772s)
skaffold_test.go:84: copying out/minikube-windows-amd64.exe to C:\jenkins\workspace\Docker_Windows_integration\out\minikube.exe
skaffold_test.go:108: (dbg) Run:  C:\Users\jenkins.minikube3\AppData\Local\Temp\skaffold.exe4191359855 run --minikube-profile skaffold-20220207205033-8704 --kube-context skaffold-20220207205033-8704 --status-check=true --port-forward=false --interactive=false
E0207 20:53:14.644777    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
skaffold_test.go:108: (dbg) Done: C:\Users\jenkins.minikube3\AppData\Local\Temp\skaffold.exe4191359855 run --minikube-profile skaffold-20220207205033-8704 --kube-context skaffold-20220207205033-8704 --status-check=true --port-forward=false --interactive=false: (1m2.1385036s)
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:343: "leeroy-app-788b4744f5-s5n4l" [8a099c10-38a2-4dbd-8d51-2ec374411e26] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-app healthy within 5.0420064s
skaffold_test.go:117: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:343: "leeroy-web-679c95c88b-hwq7k" [56d3c58e-1e46-415c-addf-051968f41cc9] Running
skaffold_test.go:117: (dbg) TestSkaffold: app=leeroy-web healthy within 5.0227504s
helpers_test.go:176: Cleaning up "skaffold-20220207205033-8704" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p skaffold-20220207205033-8704
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p skaffold-20220207205033-8704: (24.6922681s)
--- PASS: TestSkaffold (237.03s)

TestInsufficientStorage (137.35s)

=== RUN   TestInsufficientStorage
status_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-20220207205430-8704 --memory=2048 --output=json --wait=true --driver=docker
E0207 20:54:58.448210    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
status_test.go:51: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-20220207205430-8704 --memory=2048 --output=json --wait=true --driver=docker: exit status 26 (1m42.943328s)

-- stdout --
	{"specversion":"1.0","id":"230953bc-efd6-4091-abff-db2e39956b63","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220207205430-8704] minikube v1.25.1 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"571daa1c-1d0c-45ac-b411-72e4a3807ac5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube3\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"721b4ae6-1f98-4cd0-8210-100889305754","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"3cbc2d0b-a0c6-4978-bee5-8cbc2ade220b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13439"}}
	{"specversion":"1.0","id":"e3fa7825-0bbb-4469-b52e-0a3db4a735bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8e1d4235-c123-403e-a490-1c053c3aba62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"6acbfb2b-75ed-4887-bc17-2c6515338e8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a659a9f4-1885-4096-98d7-8e89868287f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220207205430-8704 in cluster insufficient-storage-20220207205430-8704","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b7a95f2a-65c9-4b6b-9125-c82b51009a91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"60cf3ada-8b49-4986-a069-018504d9aa25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"b7de3907-5641-45ec-950d-cf8a40665a13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:77: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-20220207205430-8704 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-20220207205430-8704 --output=json --layout=cluster: exit status 7 (6.3838703s)

-- stdout --
	{"Name":"insufficient-storage-20220207205430-8704","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.25.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220207205430-8704","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0207 20:56:19.711681    4972 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220207205430-8704" does not appear in C:\Users\jenkins.minikube3\minikube-integration\kubeconfig

** /stderr **
status_test.go:77: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-20220207205430-8704 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-20220207205430-8704 --output=json --layout=cluster: exit status 7 (6.6168761s)

-- stdout --
	{"Name":"insufficient-storage-20220207205430-8704","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.25.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220207205430-8704","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0207 20:56:26.337677   11528 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220207205430-8704" does not appear in C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	E0207 20:56:26.385532   11528 status.go:557] unable to read event log: stat: CreateFile C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\insufficient-storage-20220207205430-8704\events.json: The system cannot find the file specified.

** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-20220207205430-8704" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-20220207205430-8704
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-20220207205430-8704: (21.4086511s)
--- PASS: TestInsufficientStorage (137.35s)

TestRunningBinaryUpgrade (427.25s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.9.0.4055563004.exe start -p running-upgrade-20220207211100-8704 --memory=2200 --vm-driver=docker
E0207 21:11:53.881079    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.9.0.4055563004.exe start -p running-upgrade-20220207211100-8704 --memory=2200 --vm-driver=docker: (4m33.4265715s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-20220207211100-8704 --memory=2200 --alsologtostderr -v=1 --driver=docker
E0207 21:16:17.708648    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
E0207 21:16:53.869740    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-20220207211100-8704 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m48.257517s)
helpers_test.go:176: Cleaning up "running-upgrade-20220207211100-8704" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-20220207211100-8704

=== CONT  TestRunningBinaryUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-20220207211100-8704: (44.7817963s)
--- PASS: TestRunningBinaryUpgrade (427.25s)

TestKubernetesUpgrade (561.71s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220207210435-8704 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker
E0207 21:04:58.451136    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220207210435-8704 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker: (2m45.143582s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20220207210435-8704
version_upgrade_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20220207210435-8704: (23.4262096s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-20220207210435-8704 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-20220207210435-8704 status --format={{.Host}}: exit status 7 (3.3755328s)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220207210435-8704 --memory=2200 --kubernetes-version=v1.23.4-rc.0 --alsologtostderr -v=1 --driver=docker
E0207 21:08:14.656398    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
E0207 21:08:16.925422    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
E0207 21:08:55.682399    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\skaffold-20220207205033-8704\client.crt: The system cannot find the path specified.

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220207210435-8704 --memory=2200 --kubernetes-version=v1.23.4-rc.0 --alsologtostderr -v=1 --driver=docker: (4m42.4979393s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220207210435-8704 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220207210435-8704 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220207210435-8704 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker: exit status 106 (636.0536ms)

-- stdout --
	* [kubernetes-upgrade-20220207210435-8704] minikube v1.25.1 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=13439
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.23.4-rc.0 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20220207210435-8704
	    minikube start -p kubernetes-upgrade-20220207210435-8704 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220207210435-87042 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.23.4-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220207210435-8704 --kubernetes-version=v1.23.4-rc.0
	    

** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220207210435-8704 --memory=2200 --kubernetes-version=v1.23.4-rc.0 --alsologtostderr -v=1 --driver=docker
E0207 21:13:14.651975    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
version_upgrade_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220207210435-8704 --memory=2200 --kubernetes-version=v1.23.4-rc.0 --alsologtostderr -v=1 --driver=docker: (46.6256626s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-20220207210435-8704" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-20220207210435-8704

=== CONT  TestKubernetesUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-20220207210435-8704: (39.4915506s)
--- PASS: TestKubernetesUpgrade (561.71s)

TestMissingContainerUpgrade (730.39s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.9.1.922763116.exe start -p missing-upgrade-20220207210209-8704 --memory=2200 --driver=docker
E0207 21:03:01.514191    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
E0207 21:03:14.647479    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.9.1.922763116.exe start -p missing-upgrade-20220207210209-8704 --memory=2200 --driver=docker: (7m23.862811s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-20220207210209-8704

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20220207210209-8704: (24.1677547s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-20220207210209-8704
version_upgrade_test.go:330: (dbg) Done: docker rm missing-upgrade-20220207210209-8704: (2.3898183s)
version_upgrade_test.go:336: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-20220207210209-8704 --memory=2200 --alsologtostderr -v=1 --driver=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-windows-amd64.exe start -p missing-upgrade-20220207210209-8704 --memory=2200 --alsologtostderr -v=1 --driver=docker: (3m50.066894s)
helpers_test.go:176: Cleaning up "missing-upgrade-20220207210209-8704" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-20220207210209-8704
E0207 21:13:55.672264    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\skaffold-20220207205033-8704\client.crt: The system cannot find the path specified.

=== CONT  TestMissingContainerUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-20220207210209-8704: (29.1727563s)
--- PASS: TestMissingContainerUpgrade (730.39s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.41s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:84: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220207205647-8704 --no-kubernetes --kubernetes-version=1.20 --driver=docker

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:84: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-20220207205647-8704 --no-kubernetes --kubernetes-version=1.20 --driver=docker: exit status 14 (411.3077ms)

-- stdout --
	* [NoKubernetes-20220207205647-8704] minikube v1.25.1 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=13439
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.41s)

TestNoKubernetes/serial/StartWithK8s (220.87s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220207205647-8704 --driver=docker
E0207 20:56:53.863466    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
E0207 20:58:14.646565    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
E0207 20:58:55.667786    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\skaffold-20220207205033-8704\client.crt: The system cannot find the path specified.
E0207 20:58:55.673749    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\skaffold-20220207205033-8704\client.crt: The system cannot find the path specified.
E0207 20:58:55.683749    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\skaffold-20220207205033-8704\client.crt: The system cannot find the path specified.
E0207 20:58:55.703769    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\skaffold-20220207205033-8704\client.crt: The system cannot find the path specified.
E0207 20:58:55.744737    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\skaffold-20220207205033-8704\client.crt: The system cannot find the path specified.
E0207 20:58:55.825734    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\skaffold-20220207205033-8704\client.crt: The system cannot find the path specified.
E0207 20:58:55.986133    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\skaffold-20220207205033-8704\client.crt: The system cannot find the path specified.
E0207 20:58:56.306504    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\skaffold-20220207205033-8704\client.crt: The system cannot find the path specified.
E0207 20:58:56.947187    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\skaffold-20220207205033-8704\client.crt: The system cannot find the path specified.
E0207 20:58:58.227477    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\skaffold-20220207205033-8704\client.crt: The system cannot find the path specified.
E0207 20:59:00.788642    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\skaffold-20220207205033-8704\client.crt: The system cannot find the path specified.
E0207 20:59:05.910068    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\skaffold-20220207205033-8704\client.crt: The system cannot find the path specified.
E0207 20:59:16.151385    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\skaffold-20220207205033-8704\client.crt: The system cannot find the path specified.
E0207 20:59:36.632449    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\skaffold-20220207205033-8704\client.crt: The system cannot find the path specified.
E0207 20:59:37.701741    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
E0207 20:59:58.450373    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
E0207 21:00:17.593452    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\skaffold-20220207205033-8704\client.crt: The system cannot find the path specified.
no_kubernetes_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-20220207205647-8704 --driver=docker: (3m30.8363024s)
no_kubernetes_test.go:201: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-20220207205647-8704 status -o json

=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:201: (dbg) Done: out/minikube-windows-amd64.exe -p NoKubernetes-20220207205647-8704 status -o json: (10.0374054s)
--- PASS: TestNoKubernetes/serial/StartWithK8s (220.87s)

TestStoppedBinaryUpgrade/Setup (0.8s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.80s)

TestStoppedBinaryUpgrade/Upgrade (468.57s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.9.0.556377348.exe start -p stopped-upgrade-20220207210133-8704 --memory=2200 --vm-driver=docker
E0207 21:01:39.517546    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\skaffold-20220207205033-8704\client.crt: The system cannot find the path specified.

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.9.0.556377348.exe start -p stopped-upgrade-20220207210133-8704 --memory=2200 --vm-driver=docker: (5m12.4455714s)
version_upgrade_test.go:199: (dbg) Run:  C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.9.0.556377348.exe -p stopped-upgrade-20220207210133-8704 stop
E0207 21:06:53.866118    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
version_upgrade_test.go:199: (dbg) Done: C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.9.0.556377348.exe -p stopped-upgrade-20220207210133-8704 stop: (26.544155s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-20220207210133-8704 --memory=2200 --alsologtostderr -v=1 --driver=docker

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-20220207210133-8704 --memory=2200 --alsologtostderr -v=1 --driver=docker: (2m9.582037s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (468.57s)

TestStoppedBinaryUpgrade/MinikubeLogs (24.46s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-20220207210133-8704

=== CONT  TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-20220207210133-8704: (24.4605188s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (24.46s)

TestPause/serial/Start (206.06s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-20220207211356-8704 --memory=2048 --install-addons=false --wait=all --driver=docker

=== CONT  TestPause/serial/Start
pause_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-20220207211356-8704 --memory=2048 --install-addons=false --wait=all --driver=docker: (3m26.0551432s)
--- PASS: TestPause/serial/Start (206.06s)

TestPause/serial/SecondStartNoReconfiguration (44.33s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:93: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-20220207211356-8704 --alsologtostderr -v=1 --driver=docker

=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:93: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-20220207211356-8704 --alsologtostderr -v=1 --driver=docker: (44.3113115s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (44.33s)

TestPause/serial/Pause (7.21s)

=== RUN   TestPause/serial/Pause
pause_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-20220207211356-8704 --alsologtostderr -v=5

=== CONT  TestPause/serial/Pause
pause_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-20220207211356-8704 --alsologtostderr -v=5: (7.2074741s)
--- PASS: TestPause/serial/Pause (7.21s)

TestNetworkPlugins/group/false/Start (627.28s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-20220207210133-8704 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker

=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe start -p false-20220207210133-8704 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker: (10m27.280376s)
--- PASS: TestNetworkPlugins/group/false/Start (627.28s)

TestPause/serial/VerifyStatus (7.53s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:77: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-20220207211356-8704 --output=json --layout=cluster
E0207 21:18:14.652779    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.

=== CONT  TestPause/serial/VerifyStatus
status_test.go:77: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-20220207211356-8704 --output=json --layout=cluster: exit status 2 (7.534492s)

-- stdout --
	{"Name":"pause-20220207211356-8704","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.25.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220207211356-8704","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (7.53s)

TestPause/serial/Unpause (7.45s)

=== RUN   TestPause/serial/Unpause
pause_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-20220207211356-8704 --alsologtostderr -v=5

=== CONT  TestPause/serial/Unpause
pause_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-20220207211356-8704 --alsologtostderr -v=5: (7.4534494s)
--- PASS: TestPause/serial/Unpause (7.45s)

TestPause/serial/PauseAgain (7.64s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-20220207211356-8704 --alsologtostderr -v=5
pause_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-20220207211356-8704 --alsologtostderr -v=5: (7.6407178s)
--- PASS: TestPause/serial/PauseAgain (7.64s)

TestPause/serial/DeletePaused (56.78s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-20220207211356-8704 --alsologtostderr -v=5
E0207 21:18:55.674270    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\skaffold-20220207205033-8704\client.crt: The system cannot find the path specified.

=== CONT  TestPause/serial/DeletePaused
pause_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-20220207211356-8704 --alsologtostderr -v=5: (56.7832884s)
--- PASS: TestPause/serial/DeletePaused (56.78s)

TestNetworkPlugins/group/calico/Start (245.75s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-20220207210133-8704 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker
E0207 21:21:53.875304    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
E0207 21:23:14.654668    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
E0207 21:23:55.675485    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\skaffold-20220207205033-8704\client.crt: The system cannot find the path specified.
net_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe start -p calico-20220207210133-8704 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker: (4m5.7544358s)
--- PASS: TestNetworkPlugins/group/calico/Start (245.75s)

TestNetworkPlugins/group/calico/ControllerPod (5.07s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:343: "calico-node-rgfkg" [fd34efa3-6d9c-436a-859d-aabb842ce2e7] Running
net_test.go:107: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.0498333s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.07s)

TestNetworkPlugins/group/calico/KubeletFlags (7.44s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p calico-20220207210133-8704 "pgrep -a kubelet"
net_test.go:120: (dbg) Done: out/minikube-windows-amd64.exe ssh -p calico-20220207210133-8704 "pgrep -a kubelet": (7.4360063s)
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (7.44s)

TestNetworkPlugins/group/calico/NetCatPod (25.03s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context calico-20220207210133-8704 replace --force -f testdata\netcat-deployment.yaml
net_test.go:132: (dbg) Done: kubectl --context calico-20220207210133-8704 replace --force -f testdata\netcat-deployment.yaml: (1.5320123s)
net_test.go:146: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-28l8d" [e0071026-14d1-402f-bd8d-f449b02604c1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-28l8d" [e0071026-14d1-402f-bd8d-f449b02604c1] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 23.0823525s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (25.03s)

TestNetworkPlugins/group/calico/DNS (0.64s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:163: (dbg) Run:  kubectl --context calico-20220207210133-8704 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.64s)

TestNetworkPlugins/group/calico/Localhost (0.68s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:182: (dbg) Run:  kubectl --context calico-20220207210133-8704 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.68s)

TestNetworkPlugins/group/calico/HairPin (0.58s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:232: (dbg) Run:  kubectl --context calico-20220207210133-8704 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.58s)

TestNetworkPlugins/group/kindnet/Start (200.7s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-20220207210133-8704 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker
E0207 21:28:14.656800    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe start -p kindnet-20220207210133-8704 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker: (3m20.6997802s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (200.70s)

TestNetworkPlugins/group/false/KubeletFlags (7.19s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-20220207210133-8704 "pgrep -a kubelet"
net_test.go:120: (dbg) Done: out/minikube-windows-amd64.exe ssh -p false-20220207210133-8704 "pgrep -a kubelet": (7.1857683s)
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (7.19s)

TestNetworkPlugins/group/false/NetCatPod (19.93s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context false-20220207210133-8704 replace --force -f testdata\netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-5fjxh" [565ea085-94bd-4288-991d-7b4d7ce32691] Pending
helpers_test.go:343: "netcat-668db85669-5fjxh" [565ea085-94bd-4288-991d-7b4d7ce32691] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0207 21:28:55.676748    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\skaffold-20220207205033-8704\client.crt: The system cannot find the path specified.
helpers_test.go:343: "netcat-668db85669-5fjxh" [565ea085-94bd-4288-991d-7b4d7ce32691] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 19.1044963s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (19.93s)

TestNetworkPlugins/group/false/DNS (0.62s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:163: (dbg) Run:  kubectl --context false-20220207210133-8704 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.62s)

TestNetworkPlugins/group/false/Localhost (0.7s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:182: (dbg) Run:  kubectl --context false-20220207210133-8704 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.70s)

TestNetworkPlugins/group/false/HairPin (5.56s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:232: (dbg) Run:  kubectl --context false-20220207210133-8704 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:232: (dbg) Non-zero exit: kubectl --context false-20220207210133-8704 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.5569418s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.56s)

TestNetworkPlugins/group/bridge/Start (171.59s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-20220207210111-8704 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker
E0207 21:29:54.481836    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\calico-20220207210133-8704\client.crt: The system cannot find the path specified.
E0207 21:29:58.458996    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
E0207 21:30:35.444160    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\calico-20220207210133-8704\client.crt: The system cannot find the path specified.

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe start -p bridge-20220207210111-8704 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker: (2m51.5947936s)
--- PASS: TestNetworkPlugins/group/bridge/Start (171.59s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.06s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:343: "kindnet-6m272" [9317008c-869a-42b9-8ac7-9f2532ea320e] Running
net_test.go:107: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.0380236s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.06s)

TestNetworkPlugins/group/kindnet/KubeletFlags (7.46s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kindnet-20220207210133-8704 "pgrep -a kubelet"
net_test.go:120: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kindnet-20220207210133-8704 "pgrep -a kubelet": (7.4629073s)
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (7.46s)

TestNetworkPlugins/group/kindnet/NetCatPod (20.19s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context kindnet-20220207210133-8704 replace --force -f testdata\netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-dmn44" [505d94f8-dd7c-41d3-a9d4-158c6199262d] Pending
helpers_test.go:343: "netcat-668db85669-dmn44" [505d94f8-dd7c-41d3-a9d4-158c6199262d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-dmn44" [505d94f8-dd7c-41d3-a9d4-158c6199262d] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 19.1068667s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (20.19s)

TestNetworkPlugins/group/kindnet/DNS (0.71s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220207210133-8704 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.71s)

TestNetworkPlugins/group/kindnet/Localhost (0.53s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:182: (dbg) Run:  kubectl --context kindnet-20220207210133-8704 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.53s)

TestNetworkPlugins/group/kindnet/HairPin (0.58s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:232: (dbg) Run:  kubectl --context kindnet-20220207210133-8704 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.58s)

TestNetworkPlugins/group/kubenet/Start (253.25s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-20220207210111-8704 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker

=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe start -p kubenet-20220207210111-8704 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker: (4m13.2533694s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (253.25s)

TestNetworkPlugins/group/bridge/KubeletFlags (7.53s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p bridge-20220207210111-8704 "pgrep -a kubelet"
net_test.go:120: (dbg) Done: out/minikube-windows-amd64.exe ssh -p bridge-20220207210111-8704 "pgrep -a kubelet": (7.525286s)
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (7.53s)

TestNetworkPlugins/group/bridge/NetCatPod (28.87s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context bridge-20220207210111-8704 replace --force -f testdata\netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-c2smg" [ef001b2d-4076-4805-a84e-c371b9f13ce5] Pending
helpers_test.go:343: "netcat-668db85669-c2smg" [ef001b2d-4076-4805-a84e-c371b9f13ce5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0207 21:32:57.715268    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
helpers_test.go:343: "netcat-668db85669-c2smg" [ef001b2d-4076-4805-a84e-c371b9f13ce5] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 28.081267s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (28.87s)

TestNetworkPlugins/group/bridge/DNS (0.65s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220207210111-8704 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.65s)

TestNetworkPlugins/group/bridge/Localhost (0.6s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:182: (dbg) Run:  kubectl --context bridge-20220207210111-8704 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0207 21:33:14.656641    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.60s)

TestNetworkPlugins/group/bridge/HairPin (0.6s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:232: (dbg) Run:  kubectl --context bridge-20220207210111-8704 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.60s)

TestStartStop/group/old-k8s-version/serial/FirstStart (613.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-20220207213422-8704 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0
E0207 21:34:23.672380    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\false-20220207210133-8704\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-20220207213422-8704 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0: (10m13.545027s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (613.55s)

TestStartStop/group/no-preload/serial/FirstStart (255.63s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-20220207213438-8704 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.4-rc.0
E0207 21:34:41.207927    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\calico-20220207210133-8704\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-20220207213438-8704 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.4-rc.0: (4m15.6299322s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (255.63s)

TestStartStop/group/embed-certs/serial/FirstStart (215.67s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-20220207213455-8704 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.3
E0207 21:34:58.462468    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
E0207 21:35:04.637561    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\false-20220207210133-8704\client.crt: The system cannot find the path specified.
E0207 21:35:57.850072    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-20220207210133-8704\client.crt: The system cannot find the path specified.
E0207 21:35:57.855377    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-20220207210133-8704\client.crt: The system cannot find the path specified.
E0207 21:35:57.866412    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-20220207210133-8704\client.crt: The system cannot find the path specified.
E0207 21:35:57.886448    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-20220207210133-8704\client.crt: The system cannot find the path specified.
E0207 21:35:57.927382    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-20220207210133-8704\client.crt: The system cannot find the path specified.
E0207 21:35:58.007805    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-20220207210133-8704\client.crt: The system cannot find the path specified.
E0207 21:35:58.168868    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-20220207210133-8704\client.crt: The system cannot find the path specified.
E0207 21:35:58.488885    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-20220207210133-8704\client.crt: The system cannot find the path specified.
E0207 21:35:59.130028    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-20220207210133-8704\client.crt: The system cannot find the path specified.
E0207 21:36:00.410441    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-20220207210133-8704\client.crt: The system cannot find the path specified.
E0207 21:36:02.972543    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-20220207210133-8704\client.crt: The system cannot find the path specified.
E0207 21:36:08.092760    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-20220207210133-8704\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-20220207213455-8704 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.3: (3m35.6735687s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (215.67s)

TestNetworkPlugins/group/kubenet/KubeletFlags (8.47s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kubenet-20220207210111-8704 "pgrep -a kubelet"
E0207 21:36:18.333896    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-20220207210133-8704\client.crt: The system cannot find the path specified.
net_test.go:120: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kubenet-20220207210111-8704 "pgrep -a kubelet": (8.4684339s)
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (8.47s)

TestNetworkPlugins/group/kubenet/NetCatPod (24.05s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context kubenet-20220207210111-8704 replace --force -f testdata\netcat-deployment.yaml
E0207 21:36:21.526223    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
net_test.go:146: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-j7hn4" [d833852c-521d-46b4-97cb-3bc750058f05] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0207 21:36:26.558605    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\false-20220207210133-8704\client.crt: The system cannot find the path specified.
E0207 21:36:38.815075    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-20220207210133-8704\client.crt: The system cannot find the path specified.
helpers_test.go:343: "netcat-668db85669-j7hn4" [d833852c-521d-46b4-97cb-3bc750058f05] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 23.063246s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (24.05s)

TestNetworkPlugins/group/kubenet/DNS (0.65s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220207210111-8704 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.65s)

TestNetworkPlugins/group/kubenet/Localhost (0.67s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:182: (dbg) Run:  kubectl --context kubenet-20220207210111-8704 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.67s)

TestNetworkPlugins/group/kubenet/HairPin (0.66s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:232: (dbg) Run:  kubectl --context kubenet-20220207210111-8704 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.66s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (185.14s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220207213739-8704 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.3
E0207 21:37:45.396263    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-20220207210111-8704\client.crt: The system cannot find the path specified.
E0207 21:37:45.401595    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-20220207210111-8704\client.crt: The system cannot find the path specified.
E0207 21:37:45.413211    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-20220207210111-8704\client.crt: The system cannot find the path specified.
E0207 21:37:45.433391    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-20220207210111-8704\client.crt: The system cannot find the path specified.
E0207 21:37:45.474493    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-20220207210111-8704\client.crt: The system cannot find the path specified.
E0207 21:37:45.555330    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-20220207210111-8704\client.crt: The system cannot find the path specified.
E0207 21:37:45.715521    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-20220207210111-8704\client.crt: The system cannot find the path specified.
E0207 21:37:46.036538    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-20220207210111-8704\client.crt: The system cannot find the path specified.
E0207 21:37:46.677770    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-20220207210111-8704\client.crt: The system cannot find the path specified.
E0207 21:37:47.958912    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-20220207210111-8704\client.crt: The system cannot find the path specified.
E0207 21:37:50.519598    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-20220207210111-8704\client.crt: The system cannot find the path specified.
E0207 21:37:55.640739    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-20220207210111-8704\client.crt: The system cannot find the path specified.
E0207 21:38:05.881209    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-20220207210111-8704\client.crt: The system cannot find the path specified.
E0207 21:38:14.658234    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
E0207 21:38:26.362780    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-20220207210111-8704\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220207213739-8704 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.3: (3m5.1445835s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (185.14s)

TestStartStop/group/embed-certs/serial/DeployApp (18.13s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20220207213455-8704 create -f testdata\busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [544c957d-75a8-45d1-be11-626097d65292] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0207 21:38:41.697906    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-20220207210133-8704\client.crt: The system cannot find the path specified.
E0207 21:38:42.706493    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\false-20220207210133-8704\client.crt: The system cannot find the path specified.
helpers_test.go:343: "busybox" [544c957d-75a8-45d1-be11-626097d65292] Running
start_stop_delete_test.go:181: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 17.0349841s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20220207213455-8704 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (18.13s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (6.68s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-20220207213455-8704 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

=== CONT  TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-20220207213455-8704 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (6.2023184s)
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context embed-certs-20220207213455-8704 describe deploy/metrics-server -n kube-system
E0207 21:38:55.681657    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\skaffold-20220207205033-8704\client.crt: The system cannot find the path specified.
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (6.68s)

TestStartStop/group/no-preload/serial/DeployApp (11.39s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20220207213438-8704 create -f testdata\busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [706d19b4-9673-499e-8d3c-40968fa1064b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:343: "busybox" [706d19b4-9673-499e-8d3c-40968fa1064b] Running
start_stop_delete_test.go:181: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.0420174s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20220207213438-8704 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.39s)

TestStartStop/group/embed-certs/serial/Stop (22.01s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-20220207213455-8704 --alsologtostderr -v=3

=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:213: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-20220207213455-8704 --alsologtostderr -v=3: (22.0091392s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (22.01s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (6.32s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-20220207213438-8704 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0207 21:39:07.323567    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-20220207210111-8704\client.crt: The system cannot find the path specified.
E0207 21:39:10.399693    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\false-20220207210133-8704\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:190: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-20220207213438-8704 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (5.7566443s)
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context no-preload-20220207213438-8704 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (6.32s)

TestStartStop/group/no-preload/serial/Stop (20.46s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-20220207213438-8704 --alsologtostderr -v=3
E0207 21:39:13.516293    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\calico-20220207210133-8704\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:213: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-20220207213438-8704 --alsologtostderr -v=3: (20.4642719s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (20.46s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (6.48s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220207213455-8704 -n embed-certs-20220207213455-8704
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220207213455-8704 -n embed-certs-20220207213455-8704: exit status 7 (3.2820608s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-20220207213455-8704 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:231: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-20220207213455-8704 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (3.1955502s)
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (6.48s)

TestStartStop/group/embed-certs/serial/SecondStart (442.86s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-20220207213455-8704 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.3

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-20220207213455-8704 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.3: (7m12.0669411s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220207213455-8704 -n embed-certs-20220207213455-8704
start_stop_delete_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220207213455-8704 -n embed-certs-20220207213455-8704: (10.788852s)
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (442.86s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (5.96s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220207213438-8704 -n no-preload-20220207213438-8704
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220207213438-8704 -n no-preload-20220207213438-8704: exit status 7 (2.9720739s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-20220207213438-8704 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:231: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-20220207213438-8704 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (2.9893923s)
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (5.96s)

TestStartStop/group/no-preload/serial/SecondStart (442.67s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-20220207213438-8704 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.4-rc.0
E0207 21:39:58.464581    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
E0207 21:40:29.244645    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-20220207210111-8704\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-20220207213438-8704 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.4-rc.0: (7m12.8518544s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220207213438-8704 -n no-preload-20220207213438-8704

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220207213438-8704 -n no-preload-20220207213438-8704: (9.8193556s)
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (442.67s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (12.25s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20220207213739-8704 create -f testdata\busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [f0a3424b-c34b-4ad8-b094-100295a27ce7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [f0a3424b-c34b-4ad8-b094-100295a27ce7] Running
start_stop_delete_test.go:181: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 11.0288162s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20220207213739-8704 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (12.25s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (6.55s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-different-port-20220207213739-8704 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0207 21:40:57.852622    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-20220207210133-8704\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:190: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-different-port-20220207213739-8704 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (6.0067015s)
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context default-k8s-different-port-20220207213739-8704 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (6.55s)

TestStartStop/group/default-k8s-different-port/serial/Stop (23.11s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-different-port-20220207213739-8704 --alsologtostderr -v=3
E0207 21:41:21.959218    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubenet-20220207210111-8704\client.crt: The system cannot find the path specified.
E0207 21:41:21.965150    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubenet-20220207210111-8704\client.crt: The system cannot find the path specified.
E0207 21:41:21.976925    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubenet-20220207210111-8704\client.crt: The system cannot find the path specified.
E0207 21:41:21.998156    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubenet-20220207210111-8704\client.crt: The system cannot find the path specified.
E0207 21:41:22.038696    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubenet-20220207210111-8704\client.crt: The system cannot find the path specified.
E0207 21:41:22.120824    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubenet-20220207210111-8704\client.crt: The system cannot find the path specified.
E0207 21:41:22.283036    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubenet-20220207210111-8704\client.crt: The system cannot find the path specified.
E0207 21:41:22.604287    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubenet-20220207210111-8704\client.crt: The system cannot find the path specified.
E0207 21:41:23.245708    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubenet-20220207210111-8704\client.crt: The system cannot find the path specified.
E0207 21:41:24.527608    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubenet-20220207210111-8704\client.crt: The system cannot find the path specified.
E0207 21:41:25.539776    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-20220207210133-8704\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:213: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-different-port-20220207213739-8704 --alsologtostderr -v=3: (23.1126838s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (23.11s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (6.44s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220207213739-8704 -n default-k8s-different-port-20220207213739-8704
E0207 21:41:27.088416    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubenet-20220207210111-8704\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220207213739-8704 -n default-k8s-different-port-20220207213739-8704: exit status 7 (3.1443744s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-different-port-20220207213739-8704 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
E0207 21:41:32.209049    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubenet-20220207210111-8704\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:231: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-different-port-20220207213739-8704 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (3.2956913s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (6.44s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (455.75s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220207213739-8704 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.3
E0207 21:41:36.941679    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
E0207 21:41:42.450804    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubenet-20220207210111-8704\client.crt: The system cannot find the path specified.
E0207 21:41:53.876188    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
E0207 21:42:02.933623    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubenet-20220207210111-8704\client.crt: The system cannot find the path specified.
E0207 21:42:43.895337    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubenet-20220207210111-8704\client.crt: The system cannot find the path specified.
E0207 21:42:45.399058    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-20220207210111-8704\client.crt: The system cannot find the path specified.
E0207 21:43:13.086484    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-20220207210111-8704\client.crt: The system cannot find the path specified.
E0207 21:43:14.660440    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
E0207 21:43:42.708783    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\false-20220207210133-8704\client.crt: The system cannot find the path specified.
E0207 21:43:55.682519    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\skaffold-20220207205033-8704\client.crt: The system cannot find the path specified.
E0207 21:44:05.816915    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubenet-20220207210111-8704\client.crt: The system cannot find the path specified.
E0207 21:44:13.519452    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\calico-20220207210133-8704\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220207213739-8704 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.3: (7m26.0593407s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220207213739-8704 -n default-k8s-different-port-20220207213739-8704

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220207213739-8704 -n default-k8s-different-port-20220207213739-8704: (9.6894362s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (455.75s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20220207213422-8704 create -f testdata\busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [c7830c07-e6de-4a14-997b-631f338fc23f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [c7830c07-e6de-4a14-997b-631f338fc23f] Running
start_stop_delete_test.go:181: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.0403012s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20220207213422-8704 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.31s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (6.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-20220207213422-8704 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:190: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-20220207213422-8704 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (5.6630688s)
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context old-k8s-version-20220207213422-8704 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (6.11s)

TestStartStop/group/old-k8s-version/serial/Stop (21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-20220207213422-8704 --alsologtostderr -v=3
E0207 21:44:58.464658    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:213: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-20220207213422-8704 --alsologtostderr -v=3: (21.0030626s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (21.00s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (6.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220207213422-8704 -n old-k8s-version-20220207213422-8704
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220207213422-8704 -n old-k8s-version-20220207213422-8704: exit status 7 (3.0089433s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-20220207213422-8704 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:231: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-20220207213422-8704 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (3.0732922s)
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (6.08s)

TestStartStop/group/old-k8s-version/serial/SecondStart (472.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-20220207213422-8704 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0
E0207 21:45:36.573187    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\calico-20220207210133-8704\client.crt: The system cannot find the path specified.
E0207 21:45:57.852717    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-20220207210133-8704\client.crt: The system cannot find the path specified.
E0207 21:46:21.960964    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubenet-20220207210111-8704\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-20220207213422-8704 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0: (7m45.3879484s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220207213422-8704 -n old-k8s-version-20220207213422-8704
start_stop_delete_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220207213422-8704 -n old-k8s-version-20220207213422-8704: (7.5176695s)
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (472.91s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-rgm6c" [0a97bf78-4b8f-4cda-83d9-29dc3e9132c8] Running
E0207 21:46:49.658307    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubenet-20220207210111-8704\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0611095s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.06s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.72s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-rgm6c" [0a97bf78-4b8f-4cda-83d9-29dc3e9132c8] Running
E0207 21:46:53.877762    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0943417s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context embed-certs-20220207213455-8704 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.72s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (8.09s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p embed-certs-20220207213455-8704 "sudo crictl images -o json"

=== CONT  TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Done: out/minikube-windows-amd64.exe ssh -p embed-certs-20220207213455-8704 "sudo crictl images -o json": (8.0944505s)
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (8.09s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (17.1s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-4vnv2" [f1508305-9885-4e00-b1a9-2d6e3bcdc699] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-4vnv2" [f1508305-9885-4e00-b1a9-2d6e3bcdc699] Running

=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 17.0808664s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (17.10s)

TestStartStop/group/embed-certs/serial/Pause (45.66s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-20220207213455-8704 --alsologtostderr -v=1

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe pause -p embed-certs-20220207213455-8704 --alsologtostderr -v=1: (7.7184084s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20220207213455-8704 -n embed-certs-20220207213455-8704

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20220207213455-8704 -n embed-certs-20220207213455-8704: exit status 2 (7.3838657s)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-20220207213455-8704 -n embed-certs-20220207213455-8704

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-20220207213455-8704 -n embed-certs-20220207213455-8704: exit status 2 (7.5970201s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p embed-certs-20220207213455-8704 --alsologtostderr -v=1

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe unpause -p embed-certs-20220207213455-8704 --alsologtostderr -v=1: (7.4620216s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20220207213455-8704 -n embed-certs-20220207213455-8704

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20220207213455-8704 -n embed-certs-20220207213455-8704: (7.8479079s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-20220207213455-8704 -n embed-certs-20220207213455-8704
E0207 21:47:45.399907    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-20220207210111-8704\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-20220207213455-8704 -n embed-certs-20220207213455-8704: (7.6505698s)
--- PASS: TestStartStop/group/embed-certs/serial/Pause (45.66s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.56s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-4vnv2" [f1508305-9885-4e00-b1a9-2d6e3bcdc699] Running

=== CONT  TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.02177s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context no-preload-20220207213438-8704 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.56s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (7.59s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p no-preload-20220207213438-8704 "sudo crictl images -o json"

=== CONT  TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Done: out/minikube-windows-amd64.exe ssh -p no-preload-20220207213438-8704 "sudo crictl images -o json": (7.5876318s)
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (7.59s)

TestStartStop/group/no-preload/serial/Pause (58.31s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe pause -p no-preload-20220207213438-8704 --alsologtostderr -v=1

=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe pause -p no-preload-20220207213438-8704 --alsologtostderr -v=1: (7.2690658s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-20220207213438-8704 -n no-preload-20220207213438-8704

=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-20220207213438-8704 -n no-preload-20220207213438-8704: exit status 2 (7.6458546s)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-20220207213438-8704 -n no-preload-20220207213438-8704

=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-20220207213438-8704 -n no-preload-20220207213438-8704: exit status 2 (7.6673741s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p no-preload-20220207213438-8704 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe unpause -p no-preload-20220207213438-8704 --alsologtostderr -v=1: (8.4331706s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-20220207213438-8704 -n no-preload-20220207213438-8704
E0207 21:48:14.663228    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-20220207213438-8704 -n no-preload-20220207213438-8704: (18.8144705s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-20220207213438-8704 -n no-preload-20220207213438-8704
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-20220207213438-8704 -n no-preload-20220207213438-8704: (8.4769429s)
--- PASS: TestStartStop/group/no-preload/serial/Pause (58.31s)

TestStartStop/group/newest-cni/serial/FirstStart (171.27s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-20220207214858-8704 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.4-rc.0

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-20220207214858-8704 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.4-rc.0: (2m51.271017s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (171.27s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.17s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-gdf9f" [94daef33-3c1b-4f7a-8485-c67c36e4558e] Running
start_stop_delete_test.go:259: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.1597605s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.17s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.77s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0207 21:49:13.520308    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\calico-20220207210133-8704\client.crt: The system cannot find the path specified.
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-gdf9f" [94daef33-3c1b-4f7a-8485-c67c36e4558e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0793705s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20220207213739-8704 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.77s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (7.98s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p default-k8s-different-port-20220207213739-8704 "sudo crictl images -o json"
start_stop_delete_test.go:289: (dbg) Done: out/minikube-windows-amd64.exe ssh -p default-k8s-different-port-20220207213739-8704 "sudo crictl images -o json": (7.9792152s)
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (7.98s)

TestStartStop/group/default-k8s-different-port/serial/Pause (49.34s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-different-port-20220207213739-8704 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe pause -p default-k8s-different-port-20220207213739-8704 --alsologtostderr -v=1: (7.2959236s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20220207213739-8704 -n default-k8s-different-port-20220207213739-8704
E0207 21:49:37.721524    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20220207213739-8704 -n default-k8s-different-port-20220207213739-8704: exit status 2 (7.333592s)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-different-port-20220207213739-8704 -n default-k8s-different-port-20220207213739-8704
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-different-port-20220207213739-8704 -n default-k8s-different-port-20220207213739-8704: exit status 2 (7.3771496s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p default-k8s-different-port-20220207213739-8704 --alsologtostderr -v=1
E0207 21:49:58.465296    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe unpause -p default-k8s-different-port-20220207213739-8704 --alsologtostderr -v=1: (12.1181413s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20220207213739-8704 -n default-k8s-different-port-20220207213739-8704
E0207 21:50:05.763736    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\false-20220207210133-8704\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20220207213739-8704 -n default-k8s-different-port-20220207213739-8704: (7.6252005s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-different-port-20220207213739-8704 -n default-k8s-different-port-20220207213739-8704
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-different-port-20220207213739-8704 -n default-k8s-different-port-20220207213739-8704: (7.5932184s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (49.34s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (6.24s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-20220207214858-8704 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0207 21:51:53.880264    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220207195307-8704\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:190: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-20220207214858-8704 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (6.2439144s)
start_stop_delete_test.go:196: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (6.24s)

TestStartStop/group/newest-cni/serial/Stop (23.58s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-20220207214858-8704 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-20220207214858-8704 --alsologtostderr -v=3: (23.5798099s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (23.58s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (6.03s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220207214858-8704 -n newest-cni-20220207214858-8704
E0207 21:52:20.903918    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-20220207210133-8704\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220207214858-8704 -n newest-cni-20220207214858-8704: exit status 7 (3.0493018s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-20220207214858-8704 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:231: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-20220207214858-8704 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (2.9808238s)
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (6.03s)

TestStartStop/group/newest-cni/serial/SecondStart (84.08s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-20220207214858-8704 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.4-rc.0
E0207 21:52:45.400961    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-20220207210111-8704\client.crt: The system cannot find the path specified.
E0207 21:53:01.532306    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-20220207192142-8704\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-20220207214858-8704 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.4-rc.0: (1m15.7868356s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220207214858-8704 -n newest-cni-20220207214858-8704

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220207214858-8704 -n newest-cni-20220207214858-8704: (8.2930852s)
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (84.08s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.05s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-766959b846-27bjs" [0f32987c-8dd3-4bf4-ba53-474b8af3fc39] Running
E0207 21:53:14.664941    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-20220207194118-8704\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:259: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0389754s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.05s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-766959b846-27bjs" [0f32987c-8dd3-4bf4-ba53-474b8af3fc39] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0183411s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20220207213422-8704 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.57s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (7.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p old-k8s-version-20220207213422-8704 "sudo crictl images -o json"
start_stop_delete_test.go:289: (dbg) Done: out/minikube-windows-amd64.exe ssh -p old-k8s-version-20220207213422-8704 "sudo crictl images -o json": (7.8922928s)
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (7.89s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:258: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:269: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (8.21s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p newest-cni-20220207214858-8704 "sudo crictl images -o json"

=== CONT  TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Done: out/minikube-windows-amd64.exe ssh -p newest-cni-20220207214858-8704 "sudo crictl images -o json": (8.2073492s)
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (8.21s)

TestStartStop/group/newest-cni/serial/Pause (50.4s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-20220207214858-8704 --alsologtostderr -v=1
E0207 21:54:00.026909    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\no-preload-20220207213438-8704\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe pause -p newest-cni-20220207214858-8704 --alsologtostderr -v=1: (9.2129273s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20220207214858-8704 -n newest-cni-20220207214858-8704
E0207 21:54:08.452129    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-20220207210111-8704\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20220207214858-8704 -n newest-cni-20220207214858-8704: exit status 2 (7.5834204s)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-20220207214858-8704 -n newest-cni-20220207214858-8704
E0207 21:54:15.389439    8704 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\no-preload-20220207213438-8704\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-20220207214858-8704 -n newest-cni-20220207214858-8704: exit status 2 (7.5067142s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p newest-cni-20220207214858-8704 --alsologtostderr -v=1

=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe unpause -p newest-cni-20220207214858-8704 --alsologtostderr -v=1: (7.5981672s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20220207214858-8704 -n newest-cni-20220207214858-8704

=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20220207214858-8704 -n newest-cni-20220207214858-8704: (9.6715661s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-20220207214858-8704 -n newest-cni-20220207214858-8704
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-20220207214858-8704 -n newest-cni-20220207214858-8704: (8.82198s)
--- PASS: TestStartStop/group/newest-cni/serial/Pause (50.40s)

Test skip (25/273)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:123: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:142: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.23.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.23.3/cached-images
aaa_download_only_test.go:123: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.3/cached-images (0.00s)

TestDownloadOnly/v1.23.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.23.3/binaries
aaa_download_only_test.go:142: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.3/binaries (0.00s)

TestDownloadOnly/v1.23.4-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.23.4-rc.0/cached-images
aaa_download_only_test.go:123: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.4-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.23.4-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.23.4-rc.0/binaries
aaa_download_only_test.go:142: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.4-rc.0/binaries (0.00s)

TestAddons/parallel/Registry (26.11s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:281: registry stabilized in 43.894ms

=== CONT  TestAddons/parallel/Registry
addons_test.go:283: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:343: "registry-fcmht" [d55bf42e-dc40-4769-95bf-c129acb49503] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:283: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0483216s

=== CONT  TestAddons/parallel/Registry
addons_test.go:286: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:343: "registry-proxy-k42vl" [f1e425ea-9c0c-4b22-8fab-63b7c4581e16] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:286: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.1827856s
addons_test.go:291: (dbg) Run:  kubectl --context addons-20220207192142-8704 delete po -l run=registry-test --now
addons_test.go:296: (dbg) Run:  kubectl --context addons-20220207192142-8704 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:296: (dbg) Done: kubectl --context addons-20220207192142-8704 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (15.2909529s)
addons_test.go:306: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (26.11s)

TestAddons/parallel/Ingress (30.62s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:163: (dbg) Run:  kubectl --context addons-20220207192142-8704 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-20220207192142-8704 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:183: (dbg) Done: kubectl --context addons-20220207192142-8704 replace --force -f testdata\nginx-ingress-v1.yaml: (3.2978508s)
addons_test.go:196: (dbg) Run:  kubectl --context addons-20220207192142-8704 replace --force -f testdata\nginx-pod-svc.yaml

=== CONT  TestAddons/parallel/Ingress
addons_test.go:196: (dbg) Done: kubectl --context addons-20220207192142-8704 replace --force -f testdata\nginx-pod-svc.yaml: (1.6098558s)
addons_test.go:201: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [53a790b9-5fcb-42de-9ac6-72c5578a8610] Pending

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:343: "nginx" [53a790b9-5fcb-42de-9ac6-72c5578a8610] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:343: "nginx" [53a790b9-5fcb-42de-9ac6-72c5578a8610] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:201: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 17.2609397s
addons_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220207192142-8704 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

=== CONT  TestAddons/parallel/Ingress
addons_test.go:213: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220207192142-8704 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (7.9083124s)
addons_test.go:233: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (30.62s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:449: Skipping Olm addon till images are fixed
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:187: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.01s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:906: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-20220207194118-8704 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:917: output didn't produce a URL
functional_test.go:911: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-20220207194118-8704 --alsologtostderr -v=1] ...
helpers_test.go:489: unable to find parent, assuming dead: process does not exist
--- SKIP: TestFunctional/parallel/DashboardCmd (300.01s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:58: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/ServiceCmd (30.58s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1439: (dbg) Run:  kubectl --context functional-20220207194118-8704 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-20220207194118-8704 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:343: "hello-node-54fbb85-nqfj9" [a94156e3-6aab-4f10-9188-fb6a4d4898c9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:343: "hello-node-54fbb85-nqfj9" [a94156e3-6aab-4f10-9188-fb6a4d4898c9] Running
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 22.0326713s
functional_test.go:1455: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220207194118-8704 service list
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1455: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220207194118-8704 service list: (7.9152363s)
functional_test.go:1464: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmd (30.58s)
TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:194: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:35: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
TestIngressAddonLegacy/serial/ValidateIngressAddons (46.29s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:163: (dbg) Run:  kubectl --context ingress-addon-legacy-20220207195307-8704 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:163: (dbg) Done: kubectl --context ingress-addon-legacy-20220207195307-8704 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (6.7667359s)
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20220207195307-8704 replace --force -f testdata\nginx-ingress-v1beta1.yaml
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-20220207195307-8704 replace --force -f testdata\nginx-ingress-v1beta1.yaml: (1.4104428s)
addons_test.go:196: (dbg) Run:  kubectl --context ingress-addon-legacy-20220207195307-8704 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:196: (dbg) Done: kubectl --context ingress-addon-legacy-20220207195307-8704 replace --force -f testdata\nginx-pod-svc.yaml: (1.102351s)
addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [74927c8f-a3fa-4feb-9dde-8a66c44caaac] Pending
helpers_test.go:343: "nginx" [74927c8f-a3fa-4feb-9dde-8a66c44caaac] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:343: "nginx" [74927c8f-a3fa-4feb-9dde-8a66c44caaac] Running
addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 30.1662468s
addons_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220207195307-8704 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:213: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220207195307-8704 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (6.7008993s)
addons_test.go:233: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestIngressAddonLegacy/serial/ValidateIngressAddons (46.29s)
TestScheduledStopUnix (0s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:77: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)
TestNetworkPlugins/group/flannel (22.32s)
=== RUN   TestNetworkPlugins/group/flannel
net_test.go:77: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:176: Cleaning up "flannel-20220207210111-8704" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p flannel-20220207210111-8704
=== CONT  TestNetworkPlugins/group/flannel
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p flannel-20220207210111-8704: (22.3192358s)
--- SKIP: TestNetworkPlugins/group/flannel (22.32s)
TestStartStop/group/disable-driver-mounts (14.31s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-20220207213724-8704" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-20220207213724-8704
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p disable-driver-mounts-20220207213724-8704: (14.3094991s)
--- SKIP: TestStartStop/group/disable-driver-mounts (14.31s)