Test Report: Docker_macOS 13251

cce8d1911280cbcb62c9a9805b43d62c56136aef:2022-02-02:22517

Failed tests (7/227)

| Order | Failed test                                     | Duration (s) |
|-------|-------------------------------------------------|--------------|
| 4     | TestDownloadOnly/v1.16.0/preload-exists         | 0.19         |
| 33    | TestAddons/parallel/MetricsServer               | 328.47       |
| 271   | TestNetworkPlugins/group/calico/Start           | 554.7        |
| 284   | TestNetworkPlugins/group/enable-default-cni/DNS | 326.94       |
| 285   | TestNetworkPlugins/group/kindnet/Start          | 310.02       |
| 289   | TestNetworkPlugins/group/bridge/DNS             | 367.67       |
| 290   | TestNetworkPlugins/group/kubenet/Start          | 7200.592     |
TestDownloadOnly/v1.16.0/preload-exists (0.19s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
aaa_download_only_test.go:109: failed to verify preloaded tarball file exists: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/preload-exists (0.19s)
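The missing file in the `stat` error above follows minikube's preload-tarball naming scheme. As a minimal sketch, the expected filename can be reconstructed from its components; the variable names here are illustrative, read off the path in the error, not taken from minikube's source:

```shell
# Rebuild the preload tarball filename the test expects to find under
# <minikube home>/.minikube/cache/preloaded-tarball/.
# Component names below are assumptions inferred from the failing path.
PRELOAD_SCHEMA="v17"        # preload package revision
K8S_VERSION="v1.16.0"       # Kubernetes version under test
RUNTIME="docker"            # container runtime
STORAGE_DRIVER="overlay2"   # image storage driver
ARCH="amd64"                # CPU architecture
TARBALL="preloaded-images-k8s-${PRELOAD_SCHEMA}-${K8S_VERSION}-${RUNTIME}-${STORAGE_DRIVER}-${ARCH}.tar.lz4"
echo "${TARBALL}"
# prints: preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4
```

The test fails fast here because the tarball was never downloaded into the cache, so the subsequent `stat` has nothing to find.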

TestAddons/parallel/MetricsServer (328.47s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:358: metrics-server stabilized in 2.569743ms
addons_test.go:360: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:343: "metrics-server-6b76bd68b6-k8qtb" [69e334f4-7756-4177-8e4b-5e16226362d5] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:360: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.008088797s
addons_test.go:366: (dbg) Run:  kubectl --context addons-20220202161336-76172 top pods -n kube-system
addons_test.go:366: (dbg) Non-zero exit: kubectl --context addons-20220202161336-76172 top pods -n kube-system: exit status 1 (64.715866ms)

** stderr ** 
	W0202 16:17:11.493460   76878 top_pod.go:265] Metrics not available for pod kube-system/coredns-64897985d-g5hn9, age: 2m9.49345s
	error: Metrics not available for pod kube-system/coredns-64897985d-g5hn9, age: 2m9.49345s

** /stderr **
addons_test.go:366: (dbg) Run:  kubectl --context addons-20220202161336-76172 top pods -n kube-system
addons_test.go:366: (dbg) Non-zero exit: kubectl --context addons-20220202161336-76172 top pods -n kube-system: exit status 1 (65.807155ms)

** stderr ** 
	W0202 16:17:13.419748   76879 top_pod.go:265] Metrics not available for pod kube-system/coredns-64897985d-g5hn9, age: 2m11.419739s
	error: Metrics not available for pod kube-system/coredns-64897985d-g5hn9, age: 2m11.419739s

** /stderr **

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:366: (dbg) Run:  kubectl --context addons-20220202161336-76172 top pods -n kube-system
addons_test.go:366: (dbg) Non-zero exit: kubectl --context addons-20220202161336-76172 top pods -n kube-system: exit status 1 (67.704137ms)

** stderr ** 
	W0202 16:17:20.134218   76880 top_pod.go:265] Metrics not available for pod kube-system/coredns-64897985d-g5hn9, age: 2m18.134209s
	error: Metrics not available for pod kube-system/coredns-64897985d-g5hn9, age: 2m18.134209s

** /stderr **

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:366: (dbg) Run:  kubectl --context addons-20220202161336-76172 top pods -n kube-system
addons_test.go:366: (dbg) Non-zero exit: kubectl --context addons-20220202161336-76172 top pods -n kube-system: exit status 1 (63.32709ms)

** stderr ** 
	W0202 16:17:29.658672   76895 top_pod.go:265] Metrics not available for pod kube-system/coredns-64897985d-g5hn9, age: 2m27.658663s
	error: Metrics not available for pod kube-system/coredns-64897985d-g5hn9, age: 2m27.658663s

** /stderr **

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:366: (dbg) Run:  kubectl --context addons-20220202161336-76172 top pods -n kube-system
addons_test.go:366: (dbg) Non-zero exit: kubectl --context addons-20220202161336-76172 top pods -n kube-system: exit status 1 (63.434316ms)

** stderr ** 
	W0202 16:17:40.798835   76910 top_pod.go:265] Metrics not available for pod kube-system/coredns-64897985d-g5hn9, age: 2m38.798824s
	error: Metrics not available for pod kube-system/coredns-64897985d-g5hn9, age: 2m38.798824s

** /stderr **

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:366: (dbg) Run:  kubectl --context addons-20220202161336-76172 top pods -n kube-system
addons_test.go:366: (dbg) Non-zero exit: kubectl --context addons-20220202161336-76172 top pods -n kube-system: exit status 1 (62.366926ms)

** stderr ** 
	W0202 16:17:56.447226   76928 top_pod.go:265] Metrics not available for pod kube-system/coredns-64897985d-g5hn9, age: 2m54.447215s
	error: Metrics not available for pod kube-system/coredns-64897985d-g5hn9, age: 2m54.447215s

** /stderr **
addons_test.go:366: (dbg) Run:  kubectl --context addons-20220202161336-76172 top pods -n kube-system
addons_test.go:366: (dbg) Non-zero exit: kubectl --context addons-20220202161336-76172 top pods -n kube-system: exit status 1 (62.535586ms)

** stderr ** 
	W0202 16:18:21.564048   76933 top_pod.go:265] Metrics not available for pod kube-system/coredns-64897985d-g5hn9, age: 3m19.564039s
	error: Metrics not available for pod kube-system/coredns-64897985d-g5hn9, age: 3m19.564039s

** /stderr **
addons_test.go:366: (dbg) Run:  kubectl --context addons-20220202161336-76172 top pods -n kube-system
addons_test.go:366: (dbg) Non-zero exit: kubectl --context addons-20220202161336-76172 top pods -n kube-system: exit status 1 (60.778744ms)

** stderr ** 
	W0202 16:18:47.789234   76938 top_pod.go:265] Metrics not available for pod kube-system/coredns-64897985d-g5hn9, age: 3m45.789225s
	error: Metrics not available for pod kube-system/coredns-64897985d-g5hn9, age: 3m45.789225s

** /stderr **
addons_test.go:366: (dbg) Run:  kubectl --context addons-20220202161336-76172 top pods -n kube-system
addons_test.go:366: (dbg) Non-zero exit: kubectl --context addons-20220202161336-76172 top pods -n kube-system: exit status 1 (62.883619ms)

** stderr ** 
	W0202 16:19:18.722205   76943 top_pod.go:265] Metrics not available for pod kube-system/coredns-64897985d-g5hn9, age: 4m16.722195s
	error: Metrics not available for pod kube-system/coredns-64897985d-g5hn9, age: 4m16.722195s

** /stderr **
addons_test.go:366: (dbg) Run:  kubectl --context addons-20220202161336-76172 top pods -n kube-system
addons_test.go:366: (dbg) Non-zero exit: kubectl --context addons-20220202161336-76172 top pods -n kube-system: exit status 1 (62.617029ms)

** stderr ** 
	W0202 16:20:33.209168   76951 top_pod.go:265] Metrics not available for pod kube-system/coredns-64897985d-g5hn9, age: 5m31.209159s
	error: Metrics not available for pod kube-system/coredns-64897985d-g5hn9, age: 5m31.209159s

** /stderr **
addons_test.go:366: (dbg) Run:  kubectl --context addons-20220202161336-76172 top pods -n kube-system
addons_test.go:366: (dbg) Non-zero exit: kubectl --context addons-20220202161336-76172 top pods -n kube-system: exit status 1 (61.73964ms)

** stderr ** 
	W0202 16:21:33.878049   76961 top_pod.go:265] Metrics not available for pod kube-system/coredns-64897985d-g5hn9, age: 6m31.878039s
	error: Metrics not available for pod kube-system/coredns-64897985d-g5hn9, age: 6m31.878039s

** /stderr **
addons_test.go:366: (dbg) Run:  kubectl --context addons-20220202161336-76172 top pods -n kube-system
addons_test.go:366: (dbg) Non-zero exit: kubectl --context addons-20220202161336-76172 top pods -n kube-system: exit status 1 (61.703492ms)

** stderr ** 
	W0202 16:22:30.061630   76967 top_pod.go:265] Metrics not available for pod kube-system/coredns-64897985d-g5hn9, age: 7m28.06162s
	error: Metrics not available for pod kube-system/coredns-64897985d-g5hn9, age: 7m28.06162s

** /stderr **
addons_test.go:380: failed checking metric server: exit status 1
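The retry timestamps above (16:17:11, :13, :20, :29, :40, :56, then minute-plus gaps) show the test polling `kubectl top pods` with a growing delay until its deadline expires. As a generic, hedged sketch of that pattern (this is not the test's actual code; `probe` is a placeholder for the real `kubectl` check, and the limits are illustrative):

```shell
# Exponential-backoff poll: retry a probe with a doubling delay until it
# succeeds or the attempt budget runs out. `probe` stands in for the real
# check, e.g. `kubectl top pods -n kube-system`.
probe() { false; }   # placeholder: always fails, like the runs logged above
delay=1
attempts=0
max_attempts=5
until probe; do
  attempts=$((attempts + 1))
  [ "$attempts" -ge "$max_attempts" ] && break
  sleep 0              # in real use: sleep "$delay"
  delay=$((delay * 2))
done
echo "attempts=$attempts delay=$delay"
# prints: attempts=5 delay=16
```

When the budget is exhausted without a success, the caller reports the last non-zero exit status, which is exactly the `failed checking metric server: exit status 1` line above.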
addons_test.go:383: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220202161336-76172 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect addons-20220202161336-76172
helpers_test.go:236: (dbg) docker inspect addons-20220202161336-76172:

-- stdout --
	[
	    {
	        "Id": "c41ba2e843b841a44bdd196a7fb6a0789fe3dcc1973bd8964d8a9ca65e8c7b7f",
	        "Created": "2022-02-03T00:13:48.850715465Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3737,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-02-03T00:13:55.801728917Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:64d09634c60d2a75596bc705aa84bdc41f76fe47c5d9ee362550bffbdc256979",
	        "ResolvConfPath": "/var/lib/docker/containers/c41ba2e843b841a44bdd196a7fb6a0789fe3dcc1973bd8964d8a9ca65e8c7b7f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c41ba2e843b841a44bdd196a7fb6a0789fe3dcc1973bd8964d8a9ca65e8c7b7f/hostname",
	        "HostsPath": "/var/lib/docker/containers/c41ba2e843b841a44bdd196a7fb6a0789fe3dcc1973bd8964d8a9ca65e8c7b7f/hosts",
	        "LogPath": "/var/lib/docker/containers/c41ba2e843b841a44bdd196a7fb6a0789fe3dcc1973bd8964d8a9ca65e8c7b7f/c41ba2e843b841a44bdd196a7fb6a0789fe3dcc1973bd8964d8a9ca65e8c7b7f-json.log",
	        "Name": "/addons-20220202161336-76172",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-20220202161336-76172:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-20220202161336-76172",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/10383d6ca821c8276a700b729a5035bc5890f3b13cd0eb6f6a8c5319d201bcd3-init/diff:/var/lib/docker/overlay2/2a7e6cc2001e11fee4ebe50e26987d2294311d4c6e5cba4860e9cc3aa8c775f1/diff:/var/lib/docker/overlay2/75225079dcac1f7a5606093a81a0a8c373eb4da3d65cd90ddbcfb69d2624fe87/diff:/var/lib/docker/overlay2/e102d91ef30a8f3119bda2eca1ea56fa89f80d6bd06428c2d337ffd442f31e39/diff:/var/lib/docker/overlay2/2e906d2d6d22daf943a0aea5eceeb3554194635958e3c99ebafc987a6a3773c6/diff:/var/lib/docker/overlay2/ea570dd14e59999ac24760ec8128afc732d7e03000b0c846ff57f36063ea4857/diff:/var/lib/docker/overlay2/52f4d1be8ed49d3c3e4aa65645805bdeaefff9436d3a0be005ee0c01f22d6524/diff:/var/lib/docker/overlay2/4fab0356adacc3534f74fba3a295734d4364ed062cbb008da2cf4b6b7d0a93fa/diff:/var/lib/docker/overlay2/0df261bf0a8b8293f161caa2233324aa12c0c15b0095ec5b9ec30c4d8c0f1289/diff:/var/lib/docker/overlay2/9701cf193b3398acd0181490ea777089d7e3fbf7a4a0a2d0133554ca86995760/diff:/var/lib/docker/overlay2/b883650e947c28d0964c4da2c40a091dc8123e93bc57eed9f0a851e47c941aac/diff:/var/lib/docker/overlay2/7032585a99df9629540836d964bf1e9b2eebec0f02316aac93b747b173e0cad8/diff:/var/lib/docker/overlay2/62b91bb57a81a34f97d5f6ffd83241a912943cde283c183d9a07f55a92672949/diff:/var/lib/docker/overlay2/369d3bd409332d53570e4ec75c6c2ba47891be255c8ece7b9202131cd36b4404/diff:/var/lib/docker/overlay2/ed18852bc2469c676a9ed0481adf136a8d353167b3a7f52bfee4d79935c26139/diff:/var/lib/docker/overlay2/5bb2ee64dcdfe2728f75773490009b95fb9b909d064636feaf8075bbd13c85c9/diff:/var/lib/docker/overlay2/ef6ff5c7032fb5767e31428900ce994de894cd60272e9012de50ff2d7d38be0f/diff:/var/lib/docker/overlay2/33e161d7d38d725bad8809038472bc0ccdfe09cd124895bcad2a8f5f615b4de4/diff:/var/lib/docker/overlay2/95c5592d76807e381c893b4e3faf91eb98f0b89f3d8e812e1602b3fbd6282eba/diff:/var/lib/docker/overlay2/bbfc969d501deeffb78f7b6e93d2c0d17ddad78d9d1d27eaa4ada4e2dedfc37e/diff:/var/lib/docker/overlay2/31e96d0246e99ddd4d5b90503679b75ecf7b098c124c028b187600eb4d938dd8/diff:/var/lib/docker/overlay2/505d8b9cc5c8969dbce6fdf7cacddef94aa6609dffec10b704cdb6e69d6ce0e5/diff:/var/lib/docker/overlay2/411cfa777875b03e8c4ef0055bbb11dcffc8fea260819c75820efae78008687e/diff:/var/lib/docker/overlay2/216d5777c7f285f0744036a8e586e1ee61af673b4321fb8b088a0e8ebbfe819e/diff:/var/lib/docker/overlay2/a71aeff8d8919ccf39732643ee63d3083de635457c6382fbc8a3e84276c103ad/diff:/var/lib/docker/overlay2/ead8709d3fc0c08d0eac96bbdfe00216ec12c8403a39ea52b3e69288755d8d73/diff:/var/lib/docker/overlay2/3711201ea0f5fa1be41d4795c382348b51f31ee54b9d604593f80b3ee34d31fd/diff:/var/lib/docker/overlay2/75c12bc72fde0bb98e5a21f6648be245126e8507276c4725c9e55305fb3d9217/diff:/var/lib/docker/overlay2/92c133d0073dcdaf629d9697bdac9cf84fceb9554b98cdf17c0887c87ef2be89/diff:/var/lib/docker/overlay2/c067d51d62eb76562b4043fbd618dbf87f61fc61d77e3024f092098dfea90387/diff:/var/lib/docker/overlay2/c23441a7699cd1f6eda9af3296a891160f06bd8d2e9537464e4ae430e516bd99/diff:/var/lib/docker/overlay2/7c99ba0f262e34adde8bc1b90f245985daaa48e03f83b956e7984ae4cb1c5647/diff:/var/lib/docker/overlay2/ae6fd8924a1817492c2e12b25efff0f71e29bae42ec5f17f20f441b40f2db1f1/diff:/var/lib/docker/overlay2/3d465d35153dc134daba19a1d4b244a518037a2e024f84fbbc42e3c450cf8e94/diff:/var/lib/docker/overlay2/7258fe9f4b6805c2ee0ae748e188ffea153a1f1b9ce4fa950f9dbb124aed6580/diff:/var/lib/docker/overlay2/229e9099d6c560afa616010365435d1fe1cc6f000768b8e966fc3f924ba7c604/diff:/var/lib/docker/overlay2/ddc9dba6629b973d3038b7a422482fc243bae154322939ebaa77a75368dcfa08/diff:/var/lib/docker/overlay2/45e8baed0cb609a322bc42eb20a2d4afaa91f06f1affcbba332bee6f8714c6bc/diff:/var/lib/docker/overlay2/73346911f8c5f88f14bab74e68043fce4b3e7736b0a333b5ae34d44343013ae4/diff:/var/lib/docker/overlay2/12b12379bbfe5dda93638d4a87b9257deeb1643be4ffddf5551a8b41e1b41a7f/diff:/var/lib/docker/overlay2/ea6e8b819a378644e70f2185a7b51db37f8c0d24f8b6648ad388d74f08c2c510/diff:/var/lib/docker/overlay2/7b3462df9d94fb751b121776d729f78d0bc8acd4e3dd1bf143ddef20ed8733d1/diff:/var/lib/docker/overlay2/1ac60e0a0574910c05a20192d5988606665d3101ced6bdbc31b7660cd8431283/diff:/var/lib/docker/overlay2/c6cdf5fd609878026154660951e80c9c6bc61a49cd2d889fbdccea6c8c36d474/diff:/var/lib/docker/overlay2/46c08365e5d94e0fcaca61e53b0d880b1b42b9c1387136f352318dca068deef3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/10383d6ca821c8276a700b729a5035bc5890f3b13cd0eb6f6a8c5319d201bcd3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/10383d6ca821c8276a700b729a5035bc5890f3b13cd0eb6f6a8c5319d201bcd3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/10383d6ca821c8276a700b729a5035bc5890f3b13cd0eb6f6a8c5319d201bcd3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-20220202161336-76172",
	                "Source": "/var/lib/docker/volumes/addons-20220202161336-76172/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-20220202161336-76172",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-20220202161336-76172",
	                "name.minikube.sigs.k8s.io": "addons-20220202161336-76172",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "43898e24f13bc2d89bdf8a9682eeb7bb8a9620a74ba146e1ec8077d4ac4012ab",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55527"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55528"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55524"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55525"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55526"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/43898e24f13b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-20220202161336-76172": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c41ba2e843b8",
	                        "addons-20220202161336-76172"
	                    ],
	                    "NetworkID": "03f2f21d56d2bff202251d1919075836864196200ec28def3d56d2b064a1e73d",
	                    "EndpointID": "5b518967d208e5ad46380e36702c9a2fb46020558fe90ee794da6a548e143c02",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p addons-20220202161336-76172 -n addons-20220202161336-76172
helpers_test.go:245: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220202161336-76172 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p addons-20220202161336-76172 logs -n 25: (2.540215809s)
helpers_test.go:253: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------|--------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                 Args                 |               Profile                |  User   | Version |          Start Time           |           End Time            |
	|---------|--------------------------------------|--------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | --all                                | download-only-20220202161228-76172   | jenkins | v1.25.1 | Wed, 02 Feb 2022 16:13:22 PST | Wed, 02 Feb 2022 16:13:23 PST |
	| delete  | -p                                   | download-only-20220202161228-76172   | jenkins | v1.25.1 | Wed, 02 Feb 2022 16:13:23 PST | Wed, 02 Feb 2022 16:13:24 PST |
	|         | download-only-20220202161228-76172   |                                      |         |         |                               |                               |
	| delete  | -p                                   | download-only-20220202161228-76172   | jenkins | v1.25.1 | Wed, 02 Feb 2022 16:13:24 PST | Wed, 02 Feb 2022 16:13:25 PST |
	|         | download-only-20220202161228-76172   |                                      |         |         |                               |                               |
	| delete  | -p                                   | download-docker-20220202161325-76172 | jenkins | v1.25.1 | Wed, 02 Feb 2022 16:13:33 PST | Wed, 02 Feb 2022 16:13:34 PST |
	|         | download-docker-20220202161325-76172 |                                      |         |         |                               |                               |
	| delete  | -p                                   | binary-mirror-20220202161334-76172   | jenkins | v1.25.1 | Wed, 02 Feb 2022 16:13:35 PST | Wed, 02 Feb 2022 16:13:36 PST |
	|         | binary-mirror-20220202161334-76172   |                                      |         |         |                               |                               |
	| start   | -p addons-20220202161336-76172       | addons-20220202161336-76172          | jenkins | v1.25.1 | Wed, 02 Feb 2022 16:13:36 PST | Wed, 02 Feb 2022 16:16:41 PST |
	|         | --wait=true --memory=4000            |                                      |         |         |                               |                               |
	|         | --alsologtostderr                    |                                      |         |         |                               |                               |
	|         | --addons=registry                    |                                      |         |         |                               |                               |
	|         | --addons=metrics-server              |                                      |         |         |                               |                               |
	|         | --addons=olm                         |                                      |         |         |                               |                               |
	|         | --addons=volumesnapshots             |                                      |         |         |                               |                               |
	|         | --addons=csi-hostpath-driver         |                                      |         |         |                               |                               |
	|         | --addons=gcp-auth                    |                                      |         |         |                               |                               |
	|         | --driver=docker                      |                                      |         |         |                               |                               |
	|         | --addons=ingress                     |                                      |         |         |                               |                               |
	|         | --addons=ingress-dns                 |                                      |         |         |                               |                               |
	|         | --addons=helm-tiller                 |                                      |         |         |                               |                               |
	| -p      | addons-20220202161336-76172          | addons-20220202161336-76172          | jenkins | v1.25.1 | Wed, 02 Feb 2022 16:17:05 PST | Wed, 02 Feb 2022 16:17:06 PST |
	|         | addons disable helm-tiller           |                                      |         |         |                               |                               |
	|         | --alsologtostderr -v=1               |                                      |         |         |                               |                               |
	| -p      | addons-20220202161336-76172          | addons-20220202161336-76172          | jenkins | v1.25.1 | Wed, 02 Feb 2022 16:17:23 PST | Wed, 02 Feb 2022 16:17:30 PST |
	|         | addons disable                       |                                      |         |         |                               |                               |
	|         | csi-hostpath-driver                  |                                      |         |         |                               |                               |
	|         | --alsologtostderr -v=1               |                                      |         |         |                               |                               |
	| -p      | addons-20220202161336-76172          | addons-20220202161336-76172          | jenkins | v1.25.1 | Wed, 02 Feb 2022 16:17:30 PST | Wed, 02 Feb 2022 16:17:30 PST |
	|         | addons disable volumesnapshots       |                                      |         |         |                               |                               |
	|         | --alsologtostderr -v=1               |                                      |         |         |                               |                               |
	| -p      | addons-20220202161336-76172          | addons-20220202161336-76172          | jenkins | v1.25.1 | Wed, 02 Feb 2022 16:17:41 PST | Wed, 02 Feb 2022 16:17:41 PST |
	|         | ssh curl -s http://127.0.0.1/        |                                      |         |         |                               |                               |
	|         | -H 'Host: nginx.example.com'         |                                      |         |         |                               |                               |
	| -p      | addons-20220202161336-76172          | addons-20220202161336-76172          | jenkins | v1.25.1 | Wed, 02 Feb 2022 16:22:30 PST | Wed, 02 Feb 2022 16:22:30 PST |
	|         | addons disable metrics-server        |                                      |         |         |                               |                               |
	|         | --alsologtostderr -v=1               |                                      |         |         |                               |                               |
	|---------|--------------------------------------|--------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/02/02 16:13:36
	Running on machine: 37309
	Binary: Built with gc go1.17.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0202 16:13:36.125157   76521 out.go:297] Setting OutFile to fd 1 ...
	I0202 16:13:36.125296   76521 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0202 16:13:36.125302   76521 out.go:310] Setting ErrFile to fd 2...
	I0202 16:13:36.125306   76521 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0202 16:13:36.125379   76521 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/bin
	I0202 16:13:36.125717   76521 out.go:304] Setting JSON to false
	I0202 16:13:36.150803   76521 start.go:112] hostinfo: {"hostname":"37309.local","uptime":27791,"bootTime":1643819425,"procs":359,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0202 16:13:36.150903   76521 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0202 16:13:36.177909   76521 out.go:176] * [addons-20220202161336-76172] minikube v1.25.1 on Darwin 11.2.3
	I0202 16:13:36.178103   76521 notify.go:174] Checking for updates...
	I0202 16:13:36.225750   76521 out.go:176]   - MINIKUBE_LOCATION=13251
	I0202 16:13:36.251639   76521 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
	I0202 16:13:36.277619   76521 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0202 16:13:36.303528   76521 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0202 16:13:36.329329   76521 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube
	I0202 16:13:36.329560   76521 driver.go:344] Setting default libvirt URI to qemu:///system
	I0202 16:13:36.424668   76521 docker.go:132] docker version: linux-20.10.6
	I0202 16:13:36.424813   76521 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0202 16:13:36.602304   76521 info.go:263] docker info: {ID:LVNT:MQD4:UDW3:UJT2:HLHX:4UTC:4NTE:52G5:6DGB:YSKS:CFIX:B23W Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:47 SystemTime:2022-02-03 00:13:36.548061156 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerA
ddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=secc
omp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0202 16:13:36.629709   76521 out.go:176] * Using the docker driver based on user configuration
	I0202 16:13:36.629760   76521 start.go:281] selected driver: docker
	I0202 16:13:36.629771   76521 start.go:798] validating driver "docker" against <nil>
	I0202 16:13:36.629812   76521 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0202 16:13:36.633130   76521 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0202 16:13:36.809271   76521 info.go:263] docker info: {ID:LVNT:MQD4:UDW3:UJT2:HLHX:4UTC:4NTE:52G5:6DGB:YSKS:CFIX:B23W Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:47 SystemTime:2022-02-03 00:13:36.755378417 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerA
ddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=secc
omp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0202 16:13:36.809391   76521 start_flags.go:288] no existing cluster config was found, will generate one from the flags 
	I0202 16:13:36.809503   76521 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	I0202 16:13:36.809539   76521 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0202 16:13:36.809556   76521 cni.go:93] Creating CNI manager for ""
	I0202 16:13:36.809564   76521 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0202 16:13:36.809569   76521 start_flags.go:302] config:
	{Name:addons-20220202161336-76172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:addons-20220202161336-76172 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: Ne
tworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0202 16:13:36.836411   76521 out.go:176] * Starting control plane node addons-20220202161336-76172 in cluster addons-20220202161336-76172
	I0202 16:13:36.836464   76521 cache.go:120] Beginning downloading kic base image for docker with docker
	I0202 16:13:36.862312   76521 out.go:176] * Pulling base image ...
	I0202 16:13:36.862375   76521 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0202 16:13:36.862457   76521 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I0202 16:13:36.862535   76521 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4
	I0202 16:13:36.862568   76521 cache.go:57] Caching tarball of preloaded images
	I0202 16:13:36.863842   76521 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0202 16:13:36.863952   76521 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.2 on docker
	I0202 16:13:36.865332   76521 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/config.json ...
	I0202 16:13:36.865377   76521 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/config.json: {Name:mka3a2f2184d152acf5b1557ea0e30ce069175ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 16:13:36.975096   76521 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I0202 16:13:36.975131   76521 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I0202 16:13:36.975140   76521 cache.go:208] Successfully downloaded all kic artifacts
	I0202 16:13:36.975190   76521 start.go:313] acquiring machines lock for addons-20220202161336-76172: {Name:mk76dad09d2a633459c6adc36496d6d342943d7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0202 16:13:36.975343   76521 start.go:317] acquired machines lock for "addons-20220202161336-76172" in 142.136µs
	I0202 16:13:36.975372   76521 start.go:89] Provisioning new machine with config: &{Name:addons-20220202161336-76172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:addons-20220202161336-76172 Namespace:default APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse} &{Name: IP: Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0202 16:13:36.975464   76521 start.go:126] createHost starting for "" (driver="docker")
	I0202 16:13:37.023926   76521 out.go:203] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0202 16:13:37.024204   76521 start.go:160] libmachine.API.Create for "addons-20220202161336-76172" (driver="docker")
	I0202 16:13:37.024236   76521 client.go:168] LocalClient.Create starting
	I0202 16:13:37.024436   76521 main.go:130] libmachine: Creating CA: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem
	I0202 16:13:37.107647   76521 main.go:130] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/cert.pem
	I0202 16:13:37.511231   76521 cli_runner.go:133] Run: docker network inspect addons-20220202161336-76172 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0202 16:13:37.618295   76521 cli_runner.go:180] docker network inspect addons-20220202161336-76172 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0202 16:13:37.618407   76521 network_create.go:254] running [docker network inspect addons-20220202161336-76172] to gather additional debugging logs...
	I0202 16:13:37.618430   76521 cli_runner.go:133] Run: docker network inspect addons-20220202161336-76172
	W0202 16:13:37.727409   76521 cli_runner.go:180] docker network inspect addons-20220202161336-76172 returned with exit code 1
	I0202 16:13:37.727434   76521 network_create.go:257] error running [docker network inspect addons-20220202161336-76172]: docker network inspect addons-20220202161336-76172: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20220202161336-76172
	I0202 16:13:37.727453   76521 network_create.go:259] output of [docker network inspect addons-20220202161336-76172]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20220202161336-76172
	
	** /stderr **
	I0202 16:13:37.727559   76521 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0202 16:13:37.837513   76521 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00061a238] misses:0}
	I0202 16:13:37.837560   76521 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0202 16:13:37.837584   76521 network_create.go:106] attempt to create docker network addons-20220202161336-76172 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0202 16:13:37.837680   76521 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220202161336-76172
	I0202 16:13:41.815839   76521 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220202161336-76172: (3.978060429s)
	I0202 16:13:41.815862   76521 network_create.go:90] docker network addons-20220202161336-76172 192.168.49.0/24 created
	I0202 16:13:41.815880   76521 kic.go:106] calculated static IP "192.168.49.2" for the "addons-20220202161336-76172" container
	I0202 16:13:41.815996   76521 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0202 16:13:41.926700   76521 cli_runner.go:133] Run: docker volume create addons-20220202161336-76172 --label name.minikube.sigs.k8s.io=addons-20220202161336-76172 --label created_by.minikube.sigs.k8s.io=true
	I0202 16:13:42.039018   76521 oci.go:102] Successfully created a docker volume addons-20220202161336-76172
	I0202 16:13:42.039207   76521 cli_runner.go:133] Run: docker run --rm --name addons-20220202161336-76172-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20220202161336-76172 --entrypoint /usr/bin/test -v addons-20220202161336-76172:/var gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -d /var/lib
	I0202 16:13:42.745608   76521 oci.go:106] Successfully prepared a docker volume addons-20220202161336-76172
	I0202 16:13:42.745659   76521 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0202 16:13:42.745670   76521 kic.go:179] Starting extracting preloaded images to volume ...
	I0202 16:13:42.745809   76521 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20220202161336-76172:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir
	I0202 16:13:48.543314   76521 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20220202161336-76172:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir: (5.797349512s)
	I0202 16:13:48.543341   76521 kic.go:188] duration metric: took 5.797591 seconds to extract preloaded images to volume
	I0202 16:13:48.543469   76521 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0202 16:13:48.726321   76521 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-20220202161336-76172 --name addons-20220202161336-76172 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20220202161336-76172 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-20220202161336-76172 --network addons-20220202161336-76172 --ip 192.168.49.2 --volume addons-20220202161336-76172:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b
	I0202 16:13:55.802770   76521 cli_runner.go:186] Completed: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-20220202161336-76172 --name addons-20220202161336-76172 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20220202161336-76172 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-20220202161336-76172 --network addons-20220202161336-76172 --ip 192.168.49.2 --volume addons-20220202161336-76172:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b: (7.076250761s)
	I0202 16:13:55.802927   76521 cli_runner.go:133] Run: docker container inspect addons-20220202161336-76172 --format={{.State.Running}}
	I0202 16:13:55.924547   76521 cli_runner.go:133] Run: docker container inspect addons-20220202161336-76172 --format={{.State.Status}}
	I0202 16:13:56.037853   76521 cli_runner.go:133] Run: docker exec addons-20220202161336-76172 stat /var/lib/dpkg/alternatives/iptables
	I0202 16:13:56.230949   76521 oci.go:281] the created container "addons-20220202161336-76172" has a running status.
	I0202 16:13:56.230979   76521 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/addons-20220202161336-76172/id_rsa...
	I0202 16:13:56.296124   76521 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/addons-20220202161336-76172/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0202 16:13:56.460194   76521 cli_runner.go:133] Run: docker container inspect addons-20220202161336-76172 --format={{.State.Status}}
	I0202 16:13:56.575588   76521 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0202 16:13:56.575611   76521 kic_runner.go:114] Args: [docker exec --privileged addons-20220202161336-76172 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0202 16:13:56.759563   76521 cli_runner.go:133] Run: docker container inspect addons-20220202161336-76172 --format={{.State.Status}}
	I0202 16:13:56.873683   76521 machine.go:88] provisioning docker machine ...
	I0202 16:13:56.873724   76521 ubuntu.go:169] provisioning hostname "addons-20220202161336-76172"
	I0202 16:13:56.873842   76521 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220202161336-76172
	I0202 16:13:56.983306   76521 main.go:130] libmachine: Using SSH client type: native
	I0202 16:13:56.983496   76521 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 55527 <nil> <nil>}
	I0202 16:13:56.983506   76521 main.go:130] libmachine: About to run SSH command:
	sudo hostname addons-20220202161336-76172 && echo "addons-20220202161336-76172" | sudo tee /etc/hostname
	I0202 16:13:56.984927   76521 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0202 16:14:00.142228   76521 main.go:130] libmachine: SSH cmd err, output: <nil>: addons-20220202161336-76172
	
	I0202 16:14:00.142332   76521 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220202161336-76172
	I0202 16:14:00.251788   76521 main.go:130] libmachine: Using SSH client type: native
	I0202 16:14:00.251963   76521 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 55527 <nil> <nil>}
	I0202 16:14:00.251978   76521 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-20220202161336-76172' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-20220202161336-76172/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-20220202161336-76172' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0202 16:14:00.386322   76521 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0202 16:14:00.386343   76521 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube}
	I0202 16:14:00.386361   76521 ubuntu.go:177] setting up certificates
	I0202 16:14:00.386369   76521 provision.go:83] configureAuth start
	I0202 16:14:00.386459   76521 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20220202161336-76172
	I0202 16:14:00.498834   76521 provision.go:138] copyHostCerts
	I0202 16:14:00.498957   76521 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.pem (1078 bytes)
	I0202 16:14:00.499168   76521 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cert.pem (1123 bytes)
	I0202 16:14:00.499336   76521 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/key.pem (1679 bytes)
	I0202 16:14:00.499464   76521 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca-key.pem org=jenkins.addons-20220202161336-76172 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-20220202161336-76172]
	I0202 16:14:00.590392   76521 provision.go:172] copyRemoteCerts
	I0202 16:14:00.590452   76521 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0202 16:14:00.590521   76521 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220202161336-76172
	I0202 16:14:00.700819   76521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55527 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/addons-20220202161336-76172/id_rsa Username:docker}
	I0202 16:14:00.793594   76521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0202 16:14:00.811797   76521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0202 16:14:00.829910   76521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0202 16:14:00.845891   76521 provision.go:86] duration metric: configureAuth took 459.501011ms
	I0202 16:14:00.845908   76521 ubuntu.go:193] setting minikube options for container-runtime
	I0202 16:14:00.846050   76521 config.go:176] Loaded profile config "addons-20220202161336-76172": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0202 16:14:00.846118   76521 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220202161336-76172
	I0202 16:14:00.957530   76521 main.go:130] libmachine: Using SSH client type: native
	I0202 16:14:00.957689   76521 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 55527 <nil> <nil>}
	I0202 16:14:00.957700   76521 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0202 16:14:01.092868   76521 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0202 16:14:01.092883   76521 ubuntu.go:71] root file system type: overlay
	I0202 16:14:01.093000   76521 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0202 16:14:01.118187   76521 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220202161336-76172
	I0202 16:14:01.229563   76521 main.go:130] libmachine: Using SSH client type: native
	I0202 16:14:01.229768   76521 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 55527 <nil> <nil>}
	I0202 16:14:01.229824   76521 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0202 16:14:01.369775   76521 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0202 16:14:01.369895   76521 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220202161336-76172
	I0202 16:14:01.483102   76521 main.go:130] libmachine: Using SSH client type: native
	I0202 16:14:01.483254   76521 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 55527 <nil> <nil>}
	I0202 16:14:01.483266   76521 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0202 16:14:27.529292   76521 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-12-13 11:43:42.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-02-03 00:14:01.379862067 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0202 16:14:27.529313   76521 machine.go:91] provisioned docker machine in 30.655198151s
	I0202 16:14:27.529322   76521 client.go:171] LocalClient.Create took 50.50440156s
	I0202 16:14:27.529346   76521 start.go:168] duration metric: libmachine.API.Create for "addons-20220202161336-76172" took 50.504458716s
	I0202 16:14:27.529361   76521 start.go:267] post-start starting for "addons-20220202161336-76172" (driver="docker")
	I0202 16:14:27.529366   76521 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0202 16:14:27.529470   76521 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0202 16:14:27.529565   76521 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220202161336-76172
	I0202 16:14:27.639045   76521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55527 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/addons-20220202161336-76172/id_rsa Username:docker}
	I0202 16:14:27.734164   76521 ssh_runner.go:195] Run: cat /etc/os-release
	I0202 16:14:27.737555   76521 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0202 16:14:27.737570   76521 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0202 16:14:27.737576   76521 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0202 16:14:27.737582   76521 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0202 16:14:27.737591   76521 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/addons for local assets ...
	I0202 16:14:27.737691   76521 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files for local assets ...
	I0202 16:14:27.737751   76521 start.go:270] post-start completed in 208.381188ms
	I0202 16:14:27.738216   76521 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20220202161336-76172
	I0202 16:14:27.846342   76521 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/config.json ...
	I0202 16:14:27.846748   76521 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0202 16:14:27.846817   76521 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220202161336-76172
	I0202 16:14:27.956875   76521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55527 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/addons-20220202161336-76172/id_rsa Username:docker}
	I0202 16:14:28.048534   76521 start.go:129] duration metric: createHost completed in 51.07237696s
	I0202 16:14:28.048554   76521 start.go:80] releasing machines lock for "addons-20220202161336-76172", held for 51.072516559s
	I0202 16:14:28.048674   76521 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20220202161336-76172
	I0202 16:14:28.158370   76521 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0202 16:14:28.158383   76521 ssh_runner.go:195] Run: systemctl --version
	I0202 16:14:28.158462   76521 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220202161336-76172
	I0202 16:14:28.158463   76521 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220202161336-76172
	I0202 16:14:28.277346   76521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55527 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/addons-20220202161336-76172/id_rsa Username:docker}
	I0202 16:14:28.277564   76521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55527 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/addons-20220202161336-76172/id_rsa Username:docker}
	I0202 16:14:28.897211   76521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0202 16:14:28.906242   76521 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0202 16:14:28.915282   76521 cruntime.go:272] skipping containerd shutdown because we are bound to it
	I0202 16:14:28.915342   76521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0202 16:14:28.925028   76521 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0202 16:14:28.937235   76521 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0202 16:14:28.997016   76521 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0202 16:14:29.056650   76521 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0202 16:14:29.067102   76521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0202 16:14:29.121957   76521 ssh_runner.go:195] Run: sudo systemctl start docker
	I0202 16:14:29.131249   76521 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0202 16:14:29.273495   76521 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0202 16:14:29.356768   76521 out.go:203] * Preparing Kubernetes v1.23.2 on Docker 20.10.12 ...
	I0202 16:14:29.356935   76521 cli_runner.go:133] Run: docker exec -t addons-20220202161336-76172 dig +short host.docker.internal
	I0202 16:14:29.559711   76521 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0202 16:14:29.559821   76521 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0202 16:14:29.564595   76521 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0202 16:14:29.576220   76521 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" addons-20220202161336-76172
	I0202 16:14:29.713173   76521 out.go:176]   - kubelet.housekeeping-interval=5m
	I0202 16:14:29.713277   76521 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0202 16:14:29.713383   76521 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0202 16:14:29.743053   76521 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.2
	k8s.gcr.io/kube-scheduler:v1.23.2
	k8s.gcr.io/kube-controller-manager:v1.23.2
	k8s.gcr.io/kube-proxy:v1.23.2
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0202 16:14:29.743065   76521 docker.go:537] Images already preloaded, skipping extraction
	I0202 16:14:29.743186   76521 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0202 16:14:29.773966   76521 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.2
	k8s.gcr.io/kube-scheduler:v1.23.2
	k8s.gcr.io/kube-controller-manager:v1.23.2
	k8s.gcr.io/kube-proxy:v1.23.2
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0202 16:14:29.773985   76521 cache_images.go:84] Images are preloaded, skipping loading
	I0202 16:14:29.774085   76521 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0202 16:14:30.057615   76521 cni.go:93] Creating CNI manager for ""
	I0202 16:14:30.057633   76521 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0202 16:14:30.057646   76521 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0202 16:14:30.057664   76521 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-20220202161336-76172 NodeName:addons-20220202161336-76172 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0202 16:14:30.057796   76521 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "addons-20220202161336-76172"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0202 16:14:30.057897   76521 kubeadm.go:931] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=addons-20220202161336-76172 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.2 ClusterName:addons-20220202161336-76172 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0202 16:14:30.057966   76521 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.2
	I0202 16:14:30.066044   76521 binaries.go:44] Found k8s binaries, skipping transfer
	I0202 16:14:30.066129   76521 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0202 16:14:30.073004   76521 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0202 16:14:30.084667   76521 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0202 16:14:30.096610   76521 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2049 bytes)
	I0202 16:14:30.108319   76521 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0202 16:14:30.112579   76521 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0202 16:14:30.121759   76521 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172 for IP: 192.168.49.2
	I0202 16:14:30.121807   76521 certs.go:187] generating minikubeCA CA: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.key
	I0202 16:14:30.372025   76521 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.crt ...
	I0202 16:14:30.372040   76521 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.crt: {Name:mk439312ceeaef2826f840c1b004b704bfe30599 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 16:14:30.372349   76521 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.key ...
	I0202 16:14:30.372358   76521 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.key: {Name:mk35c2059cac22a42c5d47dd90daa73a40347e1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 16:14:30.372582   76521 certs.go:187] generating proxyClientCA CA: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/proxy-client-ca.key
	I0202 16:14:30.491502   76521 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/proxy-client-ca.crt ...
	I0202 16:14:30.491512   76521 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/proxy-client-ca.crt: {Name:mkb636c4558bfe317945de5919e7e19ac5b032c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 16:14:30.491736   76521 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/proxy-client-ca.key ...
	I0202 16:14:30.491743   76521 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/proxy-client-ca.key: {Name:mkb271e1b07366825b25f182b271f749c0fefb96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 16:14:30.492876   76521 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.key
	I0202 16:14:30.492901   76521 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt with IP's: []
	I0202 16:14:30.611340   76521 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt ...
	I0202 16:14:30.611351   76521 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: {Name:mk597e8eb36e39fe5eb782d910c2a795ce4a85c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 16:14:30.611587   76521 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.key ...
	I0202 16:14:30.611595   76521 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.key: {Name:mk4c6fd0ac37c2a0cc45ea0a4a50517c4d82ccd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 16:14:30.611758   76521 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/apiserver.key.dd3b5fb2
	I0202 16:14:30.611780   76521 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0202 16:14:30.671577   76521 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/apiserver.crt.dd3b5fb2 ...
	I0202 16:14:30.671607   76521 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/apiserver.crt.dd3b5fb2: {Name:mkb323a9a121615fae8e3f4f34f6458744f517dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 16:14:30.671935   76521 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/apiserver.key.dd3b5fb2 ...
	I0202 16:14:30.672033   76521 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/apiserver.key.dd3b5fb2: {Name:mka4e80176f2facbd5aec46af9ba95d40acfdb7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 16:14:30.672217   76521 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/apiserver.crt
	I0202 16:14:30.672374   76521 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/apiserver.key
	I0202 16:14:30.672513   76521 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/proxy-client.key
	I0202 16:14:30.672536   76521 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/proxy-client.crt with IP's: []
	I0202 16:14:30.709985   76521 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/proxy-client.crt ...
	I0202 16:14:30.709994   76521 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/proxy-client.crt: {Name:mkcc0f966b9f540e707836db291e961af7394c11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 16:14:30.710227   76521 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/proxy-client.key ...
	I0202 16:14:30.710235   76521 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/proxy-client.key: {Name:mk2cdce08388cf4023661ab0e9e6a78eceedfea1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 16:14:30.710639   76521 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca-key.pem (1679 bytes)
	I0202 16:14:30.710686   76521 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem (1078 bytes)
	I0202 16:14:30.710723   76521 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/cert.pem (1123 bytes)
	I0202 16:14:30.710759   76521 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/key.pem (1679 bytes)
	I0202 16:14:30.711492   76521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0202 16:14:30.728082   76521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0202 16:14:30.743653   76521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0202 16:14:30.759335   76521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0202 16:14:30.774884   76521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0202 16:14:30.790816   76521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0202 16:14:30.806878   76521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0202 16:14:30.822373   76521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0202 16:14:30.837711   76521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0202 16:14:30.854022   76521 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0202 16:14:30.865790   76521 ssh_runner.go:195] Run: openssl version
	I0202 16:14:30.873766   76521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0202 16:14:30.882221   76521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0202 16:14:30.885981   76521 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb  3 00:14 /usr/share/ca-certificates/minikubeCA.pem
	I0202 16:14:30.886032   76521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0202 16:14:30.891562   76521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0202 16:14:30.899308   76521 kubeadm.go:390] StartCluster: {Name:addons-20220202161336-76172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:addons-20220202161336-76172 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0202 16:14:30.899427   76521 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0202 16:14:30.927276   76521 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0202 16:14:30.934429   76521 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0202 16:14:30.941357   76521 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0202 16:14:30.941407   76521 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0202 16:14:30.948223   76521 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0202 16:14:30.948245   76521 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0202 16:14:31.423959   76521 out.go:203]   - Generating certificates and keys ...
	I0202 16:14:34.499796   76521 out.go:203]   - Booting up control plane ...
	I0202 16:14:49.023273   76521 out.go:203]   - Configuring RBAC rules ...
	I0202 16:14:49.409227   76521 cni.go:93] Creating CNI manager for ""
	I0202 16:14:49.409239   76521 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0202 16:14:49.409262   76521 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0202 16:14:49.409340   76521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 16:14:49.409346   76521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=e7ecaa98a6d1dab5935ea4b7778c6e187f5bde82 minikube.k8s.io/name=addons-20220202161336-76172 minikube.k8s.io/updated_at=2022_02_02T16_14_49_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 16:14:49.640402   76521 ops.go:34] apiserver oom_adj: -16
	I0202 16:14:49.640439   76521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 16:14:50.197331   76521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 16:14:50.696197   76521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 16:14:51.195278   76521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 16:14:51.695274   76521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 16:14:52.195853   76521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 16:14:52.694332   76521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 16:14:53.198599   76521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 16:14:53.698724   76521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 16:14:54.197348   76521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 16:14:54.697452   76521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 16:14:55.196080   76521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 16:14:55.696162   76521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 16:14:56.195150   76521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 16:14:56.694832   76521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 16:14:57.195022   76521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 16:14:57.693576   76521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 16:14:58.191003   76521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 16:14:58.691193   76521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 16:14:59.191042   76521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 16:14:59.691423   76521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 16:15:00.197511   76521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 16:15:00.697469   76521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 16:15:01.194798   76521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 16:15:01.250844   76521 kubeadm.go:1007] duration metric: took 11.841413933s to wait for elevateKubeSystemPrivileges.
	I0202 16:15:01.250865   76521 kubeadm.go:392] StartCluster complete in 30.351154728s
	I0202 16:15:01.250882   76521 settings.go:142] acquiring lock: {Name:mkea0cd61827c3e8cfbcf6e420c5dbfe453193c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 16:15:01.251048   76521 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
	I0202 16:15:01.251397   76521 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig: {Name:mk472bf8b440ca08b271324870e056290a1de0e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 16:15:01.772297   76521 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "addons-20220202161336-76172" rescaled to 1
	I0202 16:15:01.772338   76521 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0202 16:15:01.772351   76521 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0202 16:15:01.772381   76521 addons.go:415] enableAddons start: toEnable=map[], additional=[registry metrics-server olm volumesnapshots csi-hostpath-driver gcp-auth ingress ingress-dns helm-tiller]
	I0202 16:15:01.799610   76521 out.go:176] * Verifying Kubernetes components...
	I0202 16:15:01.772502   76521 config.go:176] Loaded profile config "addons-20220202161336-76172": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0202 16:15:01.799672   76521 addons.go:65] Setting ingress=true in profile "addons-20220202161336-76172"
	I0202 16:15:01.799681   76521 addons.go:65] Setting ingress-dns=true in profile "addons-20220202161336-76172"
	I0202 16:15:01.799676   76521 addons.go:65] Setting default-storageclass=true in profile "addons-20220202161336-76172"
	I0202 16:15:01.799694   76521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0202 16:15:01.799705   76521 addons.go:65] Setting storage-provisioner=true in profile "addons-20220202161336-76172"
	I0202 16:15:01.799706   76521 addons.go:65] Setting helm-tiller=true in profile "addons-20220202161336-76172"
	I0202 16:15:01.799709   76521 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-20220202161336-76172"
	I0202 16:15:01.799696   76521 addons.go:153] Setting addon ingress=true in "addons-20220202161336-76172"
	I0202 16:15:01.799743   76521 addons.go:65] Setting olm=true in profile "addons-20220202161336-76172"
	I0202 16:15:01.799747   76521 addons.go:65] Setting gcp-auth=true in profile "addons-20220202161336-76172"
	I0202 16:15:01.799772   76521 addons.go:65] Setting registry=true in profile "addons-20220202161336-76172"
	I0202 16:15:01.799712   76521 addons.go:153] Setting addon storage-provisioner=true in "addons-20220202161336-76172"
	I0202 16:15:01.799777   76521 addons.go:65] Setting csi-hostpath-driver=true in profile "addons-20220202161336-76172"
	I0202 16:15:01.799801   76521 addons.go:153] Setting addon olm=true in "addons-20220202161336-76172"
	I0202 16:15:01.799804   76521 addons.go:153] Setting addon registry=true in "addons-20220202161336-76172"
	I0202 16:15:01.799814   76521 mustload.go:65] Loading cluster: addons-20220202161336-76172
	I0202 16:15:01.799822   76521 host.go:66] Checking if "addons-20220202161336-76172" exists ...
	W0202 16:15:01.799833   76521 addons.go:165] addon storage-provisioner should already be in state true
	I0202 16:15:01.799860   76521 host.go:66] Checking if "addons-20220202161336-76172" exists ...
	I0202 16:15:01.799673   76521 addons.go:65] Setting volumesnapshots=true in profile "addons-20220202161336-76172"
	I0202 16:15:01.799874   76521 addons.go:153] Setting addon csi-hostpath-driver=true in "addons-20220202161336-76172"
	I0202 16:15:01.799873   76521 host.go:66] Checking if "addons-20220202161336-76172" exists ...
	I0202 16:15:01.799896   76521 addons.go:153] Setting addon volumesnapshots=true in "addons-20220202161336-76172"
	I0202 16:15:01.799914   76521 host.go:66] Checking if "addons-20220202161336-76172" exists ...
	I0202 16:15:01.799920   76521 host.go:66] Checking if "addons-20220202161336-76172" exists ...
	I0202 16:15:01.799935   76521 host.go:66] Checking if "addons-20220202161336-76172" exists ...
	I0202 16:15:01.799696   76521 addons.go:153] Setting addon ingress-dns=true in "addons-20220202161336-76172"
	I0202 16:15:01.800016   76521 host.go:66] Checking if "addons-20220202161336-76172" exists ...
	I0202 16:15:01.799728   76521 addons.go:65] Setting metrics-server=true in profile "addons-20220202161336-76172"
	I0202 16:15:01.799722   76521 addons.go:153] Setting addon helm-tiller=true in "addons-20220202161336-76172"
	I0202 16:15:01.800102   76521 host.go:66] Checking if "addons-20220202161336-76172" exists ...
	I0202 16:15:01.800179   76521 addons.go:153] Setting addon metrics-server=true in "addons-20220202161336-76172"
	I0202 16:15:01.800232   76521 cli_runner.go:133] Run: docker container inspect addons-20220202161336-76172 --format={{.State.Status}}
	I0202 16:15:01.800243   76521 host.go:66] Checking if "addons-20220202161336-76172" exists ...
	I0202 16:15:01.800285   76521 config.go:176] Loaded profile config "addons-20220202161336-76172": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0202 16:15:01.801156   76521 cli_runner.go:133] Run: docker container inspect addons-20220202161336-76172 --format={{.State.Status}}
	I0202 16:15:01.801292   76521 cli_runner.go:133] Run: docker container inspect addons-20220202161336-76172 --format={{.State.Status}}
	I0202 16:15:01.801317   76521 cli_runner.go:133] Run: docker container inspect addons-20220202161336-76172 --format={{.State.Status}}
	I0202 16:15:01.802208   76521 cli_runner.go:133] Run: docker container inspect addons-20220202161336-76172 --format={{.State.Status}}
	I0202 16:15:01.821199   76521 cli_runner.go:133] Run: docker container inspect addons-20220202161336-76172 --format={{.State.Status}}
	I0202 16:15:01.821206   76521 cli_runner.go:133] Run: docker container inspect addons-20220202161336-76172 --format={{.State.Status}}
	I0202 16:15:01.821203   76521 cli_runner.go:133] Run: docker container inspect addons-20220202161336-76172 --format={{.State.Status}}
	I0202 16:15:01.822222   76521 cli_runner.go:133] Run: docker container inspect addons-20220202161336-76172 --format={{.State.Status}}
	I0202 16:15:01.822326   76521 cli_runner.go:133] Run: docker container inspect addons-20220202161336-76172 --format={{.State.Status}}
	I0202 16:15:01.822410   76521 cli_runner.go:133] Run: docker container inspect addons-20220202161336-76172 --format={{.State.Status}}
	I0202 16:15:01.845169   76521 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" addons-20220202161336-76172
	I0202 16:15:01.845170   76521 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0202 16:15:02.107689   76521 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort}}'" addons-20220202161336-76172
	I0202 16:15:02.151489   76521 out.go:176] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0202 16:15:02.188986   76521 out.go:176]   - Using image quay.io/operator-framework/olm
	I0202 16:15:02.302988   76521 out.go:176]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0202 16:15:02.237107   76521 out.go:176]   - Using image k8s.gcr.io/ingress-nginx/controller:v1.1.0
	I0202 16:15:02.240277   76521 host.go:66] Checking if "addons-20220202161336-76172" exists ...
	I0202 16:15:02.265078   76521 out.go:176]   - Using image quay.io/operatorhubio/catalog
	I0202 16:15:02.303165   76521 addons.go:348] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0202 16:15:02.306626   76521 node_ready.go:35] waiting up to 6m0s for node "addons-20220202161336-76172" to be "Ready" ...
	I0202 16:15:02.310754   76521 addons.go:153] Setting addon default-storageclass=true in "addons-20220202161336-76172"
	I0202 16:15:02.346032   76521 out.go:176] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0202 16:15:02.383010   76521 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0202 16:15:02.409986   76521 out.go:176]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0
	I0202 16:15:02.410385   76521 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" addons-20220202161336-76172
	I0202 16:15:02.458029   76521 out.go:176]   - Using image k8s.gcr.io/metrics-server/metrics-server:v0.4.2
	I0202 16:15:02.491991   76521 out.go:176]   - Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
	W0202 16:15:02.492004   76521 addons.go:165] addon default-storageclass should already be in state true
	I0202 16:15:02.492015   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0202 16:15:02.492117   76521 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0202 16:15:02.496009   76521 node_ready.go:49] node "addons-20220202161336-76172" has status "Ready":"True"
	I0202 16:15:02.510973   76521 addons.go:348] installing /etc/kubernetes/addons/crds.yaml
	I0202 16:15:02.518989   76521 out.go:176]   - Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
	I0202 16:15:02.545049   76521 out.go:176] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                      │
	│    Registry addon with docker driver uses port 55525 please use that instead of default port 5000    │
	│                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0202 16:15:02.545166   76521 host.go:66] Checking if "addons-20220202161336-76172" exists ...
	I0202 16:15:02.545199   76521 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0202 16:15:02.545203   76521 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0202 16:15:02.545253   76521 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220202161336-76172
	I0202 16:15:02.570979   76521 out.go:176]   - Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
	I0202 16:15:02.570999   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0202 16:15:02.596909   76521 out.go:176]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0202 16:15:02.596914   76521 node_ready.go:38] duration metric: took 104.884039ms waiting for node "addons-20220202161336-76172" to be "Ready" ...
	I0202 16:15:02.596923   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/crds.yaml (636901 bytes)
	I0202 16:15:02.596953   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0202 16:15:02.596942   76521 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0202 16:15:02.597016   76521 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220202161336-76172
	I0202 16:15:02.597020   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0202 16:15:02.597059   76521 addons.go:348] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0202 16:15:02.597133   76521 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220202161336-76172
	I0202 16:15:02.597149   76521 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220202161336-76172
	I0202 16:15:02.597155   76521 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220202161336-76172
	I0202 16:15:02.609596   76521 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-9wx86" in "kube-system" namespace to be "Ready" ...
	I0202 16:15:02.623026   76521 out.go:176]   - Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
	I0202 16:15:02.623068   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0202 16:15:02.623332   76521 cli_runner.go:133] Run: docker container inspect addons-20220202161336-76172 --format={{.State.Status}}
	I0202 16:15:02.665006   76521 out.go:176] * For more information see: https://minikube.sigs.k8s.io/docs/drivers/docker
	I0202 16:15:02.707055   76521 out.go:176]   - Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
	I0202 16:15:02.708110   76521 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220202161336-76172
	I0202 16:15:02.740548   76521 out.go:176]   - Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
	I0202 16:15:02.782051   76521 out.go:176]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I0202 16:15:02.716523   76521 addons.go:348] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0202 16:15:02.782121   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (17469 bytes)
	I0202 16:15:02.846308   76521 out.go:176]   - Using image registry:2.7.1
	I0202 16:15:02.795284   76521 start.go:777] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0202 16:15:02.812106   76521 out.go:176]   - Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
	I0202 16:15:02.812254   76521 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220202161336-76172
	I0202 16:15:02.895115   76521 out.go:176]   - Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
	I0202 16:15:02.846520   76521 addons.go:348] installing /etc/kubernetes/addons/registry-rc.yaml
	I0202 16:15:02.895174   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (788 bytes)
	I0202 16:15:02.895299   76521 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220202161336-76172
	I0202 16:15:02.946361   76521 out.go:176]   - Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0
	I0202 16:15:02.899701   76521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55527 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/addons-20220202161336-76172/id_rsa Username:docker}
	I0202 16:15:02.905727   76521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55527 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/addons-20220202161336-76172/id_rsa Username:docker}
	I0202 16:15:02.913862   76521 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0202 16:15:03.014013   76521 out.go:176]   - Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
	I0202 16:15:02.947914   76521 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220202161336-76172
	I0202 16:15:02.970442   76521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55527 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/addons-20220202161336-76172/id_rsa Username:docker}
	I0202 16:15:02.971911   76521 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0202 16:15:03.014104   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0202 16:15:02.973258   76521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55527 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/addons-20220202161336-76172/id_rsa Username:docker}
	I0202 16:15:02.973952   76521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55527 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/addons-20220202161336-76172/id_rsa Username:docker}
	I0202 16:15:02.974802   76521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55527 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/addons-20220202161336-76172/id_rsa Username:docker}
	I0202 16:15:03.015029   76521 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220202161336-76172
	I0202 16:15:03.062257   76521 out.go:176]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0
	I0202 16:15:03.063090   76521 addons.go:348] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0202 16:15:03.063113   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0202 16:15:03.063433   76521 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220202161336-76172
	I0202 16:15:03.080033   76521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55527 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/addons-20220202161336-76172/id_rsa Username:docker}
	I0202 16:15:03.102698   76521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0202 16:15:03.168317   76521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55527 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/addons-20220202161336-76172/id_rsa Username:docker}
	I0202 16:15:03.207223   76521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55527 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/addons-20220202161336-76172/id_rsa Username:docker}
	I0202 16:15:03.208690   76521 addons.go:348] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0202 16:15:03.208707   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0202 16:15:03.237764   76521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55527 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/addons-20220202161336-76172/id_rsa Username:docker}
	I0202 16:15:03.237811   76521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55527 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/addons-20220202161336-76172/id_rsa Username:docker}
	I0202 16:15:03.297158   76521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0202 16:15:03.362184   76521 addons.go:348] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0202 16:15:03.362197   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0202 16:15:03.363647   76521 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0202 16:15:03.363670   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1931 bytes)
	I0202 16:15:03.369645   76521 addons.go:348] installing /etc/kubernetes/addons/olm.yaml
	I0202 16:15:03.369661   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/olm.yaml (9994 bytes)
	I0202 16:15:03.373152   76521 addons.go:348] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0202 16:15:03.373170   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0202 16:15:03.467453   76521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0202 16:15:03.471435   76521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0202 16:15:03.473065   76521 addons.go:348] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0202 16:15:03.473078   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0202 16:15:03.482206   76521 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0202 16:15:03.482221   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0202 16:15:03.488620   76521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0202 16:15:03.568952   76521 addons.go:348] installing /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml
	I0202 16:15:03.568968   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml (2203 bytes)
	I0202 16:15:03.591584   76521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0202 16:15:03.664143   76521 addons.go:348] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0202 16:15:03.664171   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19584 bytes)
	I0202 16:15:03.678697   76521 addons.go:348] installing /etc/kubernetes/addons/registry-svc.yaml
	I0202 16:15:03.678716   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0202 16:15:03.698174   76521 addons.go:348] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0202 16:15:03.698187   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3037 bytes)
	I0202 16:15:03.778634   76521 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0202 16:15:03.783787   76521 addons.go:348] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0202 16:15:03.783805   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
	I0202 16:15:03.789192   76521 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0202 16:15:03.789206   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0202 16:15:03.891280   76521 addons.go:348] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0202 16:15:03.891296   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3428 bytes)
	I0202 16:15:03.963316   76521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0202 16:15:03.966257   76521 addons.go:348] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0202 16:15:03.966282   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (3666 bytes)
	I0202 16:15:03.979921   76521 addons.go:153] Setting addon gcp-auth=true in "addons-20220202161336-76172"
	I0202 16:15:03.979966   76521 host.go:66] Checking if "addons-20220202161336-76172" exists ...
	I0202 16:15:03.980729   76521 cli_runner.go:133] Run: docker container inspect addons-20220202161336-76172 --format={{.State.Status}}
	I0202 16:15:03.986355   76521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0202 16:15:04.115075   76521 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0202 16:15:04.115149   76521 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220202161336-76172
	I0202 16:15:04.192032   76521 addons.go:348] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0202 16:15:04.192056   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2944 bytes)
	I0202 16:15:04.247583   76521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55527 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/addons-20220202161336-76172/id_rsa Username:docker}
	I0202 16:15:04.263734   76521 addons.go:348] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0202 16:15:04.263753   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1071 bytes)
	I0202 16:15:04.392909   76521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0202 16:15:04.482605   76521 addons.go:348] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0202 16:15:04.482623   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3194 bytes)
	I0202 16:15:04.690547   76521 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0202 16:15:04.690564   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2421 bytes)
	I0202 16:15:04.768043   76521 pod_ready.go:102] pod "coredns-64897985d-9wx86" in "kube-system" namespace has status "Ready":"False"
	I0202 16:15:04.871302   76521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (1.40379209s)
	I0202 16:15:05.081395   76521 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0202 16:15:05.081409   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1034 bytes)
	I0202 16:15:05.284650   76521 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0202 16:15:05.284663   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (6710 bytes)
	I0202 16:15:05.490963   76521 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-provisioner.yaml
	I0202 16:15:05.490979   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-provisioner.yaml (2555 bytes)
	I0202 16:15:05.680061   76521 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0202 16:15:05.680082   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2469 bytes)
	I0202 16:15:05.762745   76521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (2.291257598s)
	I0202 16:15:05.762796   76521 addons.go:386] Verifying addon ingress=true in "addons-20220202161336-76172"
	I0202 16:15:05.779683   76521 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml
	I0202 16:15:05.793235   76521 out.go:176] * Verifying ingress addon...
	I0202 16:15:05.793281   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml (2555 bytes)
	I0202 16:15:05.796610   76521 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0202 16:15:05.860619   76521 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0202 16:15:05.860634   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0202 16:15:05.863902   76521 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0202 16:15:05.863914   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:05.961478   76521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0202 16:15:06.377800   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:06.871770   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:07.269046   76521 pod_ready.go:102] pod "coredns-64897985d-9wx86" in "kube-system" namespace has status "Ready":"False"
	I0202 16:15:07.272228   76521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (3.783536031s)
	I0202 16:15:07.272253   76521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.680606856s)
	W0202 16:15:07.272259   76521 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorconditions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0202 16:15:07.272283   76521 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorconditions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0202 16:15:07.272352   76521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.308969324s)
	I0202 16:15:07.272371   76521 addons.go:386] Verifying addon metrics-server=true in "addons-20220202161336-76172"
	I0202 16:15:07.272389   76521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (3.28595494s)
	I0202 16:15:07.272402   76521 addons.go:386] Verifying addon registry=true in "addons-20220202161336-76172"
	I0202 16:15:07.272458   76521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.879488448s)
	W0202 16:15:07.272480   76521 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0202 16:15:07.272492   76521 retry.go:31] will retry after 360.127272ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0202 16:15:07.272499   76521 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.157364669s)
	I0202 16:15:07.299116   76521 out.go:176] * Verifying registry addon...
	I0202 16:15:07.319520   76521 out.go:176]   - Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.0
	I0202 16:15:07.331845   76521 out.go:176]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.0.8
	I0202 16:15:07.322541   76521 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0202 16:15:07.331883   76521 addons.go:348] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0202 16:15:07.331890   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0202 16:15:07.363236   76521 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0202 16:15:07.363250   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:07.369685   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:07.380228   76521 addons.go:348] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0202 16:15:07.380243   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0202 16:15:07.399959   76521 addons.go:348] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0202 16:15:07.399971   76521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (4842 bytes)
	I0202 16:15:07.476440   76521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0202 16:15:07.558672   76521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0202 16:15:07.642324   76521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0202 16:15:07.869493   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:07.869563   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:08.368330   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:08.368772   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:08.870271   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:08.870400   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:09.377841   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:09.380149   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:09.467905   76521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.506342271s)
	I0202 16:15:09.467962   76521 addons.go:386] Verifying addon csi-hostpath-driver=true in "addons-20220202161336-76172"
	I0202 16:15:09.494837   76521 out.go:176] * Verifying csi-hostpath-driver addon...
	I0202 16:15:09.534020   76521 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0202 16:15:09.563249   76521 kapi.go:86] Found 5 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0202 16:15:09.563267   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:09.774016   76521 pod_ready.go:102] pod "coredns-64897985d-9wx86" in "kube-system" namespace has status "Ready":"False"
	I0202 16:15:09.870219   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:09.870565   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:10.076823   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:10.081533   76521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.605030086s)
	I0202 16:15:10.082934   76521 addons.go:386] Verifying addon gcp-auth=true in "addons-20220202161336-76172"
	I0202 16:15:10.111161   76521 out.go:176] * Verifying gcp-auth addon...
	I0202 16:15:10.134072   76521 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0202 16:15:10.159426   76521 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0202 16:15:10.159441   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:10.372789   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:10.373711   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:10.580904   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:10.666829   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:10.869759   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:10.870228   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:11.068317   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:11.168275   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:11.369436   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:11.370689   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:11.572239   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:11.667961   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:11.777244   76521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (4.21848928s)
	I0202 16:15:11.777382   76521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.134979159s)
	I0202 16:15:11.872074   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:11.872310   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:12.068011   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:12.165764   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:12.261957   76521 pod_ready.go:102] pod "coredns-64897985d-9wx86" in "kube-system" namespace has status "Ready":"False"
	I0202 16:15:12.372972   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:12.373545   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:12.576223   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:12.663382   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:12.761451   76521 pod_ready.go:92] pod "coredns-64897985d-9wx86" in "kube-system" namespace has status "Ready":"True"
	I0202 16:15:12.761470   76521 pod_ready.go:81] duration metric: took 10.054164772s waiting for pod "coredns-64897985d-9wx86" in "kube-system" namespace to be "Ready" ...
	I0202 16:15:12.761484   76521 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-g5hn9" in "kube-system" namespace to be "Ready" ...
	I0202 16:15:12.872666   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:12.873329   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:13.067372   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:13.167589   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:13.368314   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:13.368401   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:13.567734   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:13.662190   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:13.874932   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:13.874941   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:14.068361   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:14.162593   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:14.370620   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:14.371051   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:14.568401   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:14.663593   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:14.784440   76521 pod_ready.go:102] pod "coredns-64897985d-g5hn9" in "kube-system" namespace has status "Ready":"False"
	I0202 16:15:14.869335   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:14.869813   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:15.068284   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:15.166212   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:15.368145   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:15.368700   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:15.568155   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:15.662630   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:15.869485   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:15.870820   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:16.073530   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:16.162452   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:16.373280   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:16.373542   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:16.568232   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:16.671181   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:16.873915   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:16.873999   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:17.071266   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:17.162795   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:17.283736   76521 pod_ready.go:102] pod "coredns-64897985d-g5hn9" in "kube-system" namespace has status "Ready":"False"
	I0202 16:15:17.370756   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:17.370870   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:17.570038   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:17.663547   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:17.867301   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:17.868194   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:18.150119   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:18.163263   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:18.372641   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:18.372767   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:18.573564   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:18.663755   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:18.867181   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:18.867253   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:19.069752   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:19.163186   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:19.371797   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:19.371874   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:19.573563   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:19.662490   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:19.783328   76521 pod_ready.go:102] pod "coredns-64897985d-g5hn9" in "kube-system" namespace has status "Ready":"False"
	I0202 16:15:19.867362   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:19.867478   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:20.073052   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:20.162785   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:20.367047   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:20.367418   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:20.568538   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:20.663686   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:20.874309   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:20.874324   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:21.070171   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:21.165135   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:21.376046   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:21.376824   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:21.568281   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:21.667821   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:21.785886   76521 pod_ready.go:102] pod "coredns-64897985d-g5hn9" in "kube-system" namespace has status "Ready":"False"
	I0202 16:15:21.867967   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:21.868487   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:22.073966   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:22.165826   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:22.368833   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:22.369712   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:22.568965   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:22.665865   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:22.872723   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:22.872798   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:23.077129   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:23.165641   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:23.368387   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:23.368428   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:23.572729   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:23.666125   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:23.870440   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:23.870533   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:24.073672   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:24.165160   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:24.283614   76521 pod_ready.go:102] pod "coredns-64897985d-g5hn9" in "kube-system" namespace has status "Ready":"False"
	I0202 16:15:24.367567   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:24.368259   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:24.570818   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:24.665134   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:24.867427   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:24.867519   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:25.068525   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:25.172535   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:25.368611   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:25.368892   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:25.573241   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:25.664928   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:25.867469   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:25.867602   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:26.068890   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:26.170116   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:26.367741   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:26.367903   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:26.572417   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:26.667795   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:26.783919   76521 pod_ready.go:102] pod "coredns-64897985d-g5hn9" in "kube-system" namespace has status "Ready":"False"
	I0202 16:15:26.868004   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:26.868141   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:27.068473   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:27.165191   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:27.369821   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:27.370388   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:27.568218   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:27.664497   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:27.902197   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:27.902438   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:28.068887   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:28.169153   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:28.372442   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:28.372522   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:28.570051   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:28.664807   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:28.792191   76521 pod_ready.go:102] pod "coredns-64897985d-g5hn9" in "kube-system" namespace has status "Ready":"False"
	I0202 16:15:28.867996   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:28.868014   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:29.072250   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:29.167072   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:29.367796   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:29.367811   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:29.570271   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:29.669526   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:29.868253   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:29.868311   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:30.078127   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:30.166003   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:30.368888   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:30.369209   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:30.568614   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:30.665093   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:30.867488   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:30.868036   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:31.071021   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:31.165245   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:31.276027   76521 pod_ready.go:102] pod "coredns-64897985d-g5hn9" in "kube-system" namespace has status "Ready":"False"
	I0202 16:15:31.369424   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:31.369992   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:31.575960   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:31.670361   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:31.867438   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:31.867626   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:32.073238   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:32.165025   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:32.367676   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:32.368269   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:32.568041   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:32.664796   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:32.876796   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:32.880790   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:33.067845   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:33.167501   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:33.284946   76521 pod_ready.go:102] pod "coredns-64897985d-g5hn9" in "kube-system" namespace has status "Ready":"False"
	I0202 16:15:33.367638   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:33.368164   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:33.572153   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:33.670506   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:33.868830   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:33.868901   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:34.071017   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:34.167159   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:34.372018   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:34.372024   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:34.570360   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:34.666366   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:34.869810   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:34.870989   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:35.075655   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:35.166948   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:35.286632   76521 pod_ready.go:102] pod "coredns-64897985d-g5hn9" in "kube-system" namespace has status "Ready":"False"
	I0202 16:15:35.373170   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:35.373234   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:35.569396   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:35.669174   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:35.870985   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:35.872864   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:36.069693   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:36.165281   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:36.368858   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:36.369422   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:36.570361   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:36.670566   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:36.869594   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:36.870178   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:37.076179   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:37.165064   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:37.371544   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:37.371859   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:37.568690   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:37.665035   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:37.780403   76521 pod_ready.go:102] pod "coredns-64897985d-g5hn9" in "kube-system" namespace has status "Ready":"False"
	I0202 16:15:37.868554   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:37.868733   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:38.073468   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:38.165021   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:38.368828   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:38.368914   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:38.568684   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:38.664821   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:38.869714   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:38.869951   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:39.070250   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:39.168853   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:39.368409   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:39.368436   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:39.573741   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:39.667167   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:39.783869   76521 pod_ready.go:102] pod "coredns-64897985d-g5hn9" in "kube-system" namespace has status "Ready":"False"
	I0202 16:15:39.868493   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:39.870758   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:40.068519   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:40.165334   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:40.369860   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:40.371033   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:40.568954   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:40.669600   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:40.868821   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:40.869138   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:41.074683   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:41.165826   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:41.369413   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:41.369418   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:41.576788   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:41.667824   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:41.784565   76521 pod_ready.go:102] pod "coredns-64897985d-g5hn9" in "kube-system" namespace has status "Ready":"False"
	I0202 16:15:41.869264   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:41.869341   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:42.077365   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:42.165219   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:42.284802   76521 pod_ready.go:92] pod "coredns-64897985d-g5hn9" in "kube-system" namespace has status "Ready":"True"
	I0202 16:15:42.284814   76521 pod_ready.go:81] duration metric: took 29.52292564s waiting for pod "coredns-64897985d-g5hn9" in "kube-system" namespace to be "Ready" ...
	I0202 16:15:42.284820   76521 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-20220202161336-76172" in "kube-system" namespace to be "Ready" ...
	I0202 16:15:42.289738   76521 pod_ready.go:92] pod "etcd-addons-20220202161336-76172" in "kube-system" namespace has status "Ready":"True"
	I0202 16:15:42.289748   76521 pod_ready.go:81] duration metric: took 4.924232ms waiting for pod "etcd-addons-20220202161336-76172" in "kube-system" namespace to be "Ready" ...
	I0202 16:15:42.289755   76521 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-20220202161336-76172" in "kube-system" namespace to be "Ready" ...
	I0202 16:15:42.295076   76521 pod_ready.go:92] pod "kube-apiserver-addons-20220202161336-76172" in "kube-system" namespace has status "Ready":"True"
	I0202 16:15:42.295086   76521 pod_ready.go:81] duration metric: took 5.326505ms waiting for pod "kube-apiserver-addons-20220202161336-76172" in "kube-system" namespace to be "Ready" ...
	I0202 16:15:42.295093   76521 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-20220202161336-76172" in "kube-system" namespace to be "Ready" ...
	I0202 16:15:42.299955   76521 pod_ready.go:92] pod "kube-controller-manager-addons-20220202161336-76172" in "kube-system" namespace has status "Ready":"True"
	I0202 16:15:42.299965   76521 pod_ready.go:81] duration metric: took 4.866359ms waiting for pod "kube-controller-manager-addons-20220202161336-76172" in "kube-system" namespace to be "Ready" ...
	I0202 16:15:42.299971   76521 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zw58s" in "kube-system" namespace to be "Ready" ...
	I0202 16:15:42.304550   76521 pod_ready.go:92] pod "kube-proxy-zw58s" in "kube-system" namespace has status "Ready":"True"
	I0202 16:15:42.304560   76521 pod_ready.go:81] duration metric: took 4.584989ms waiting for pod "kube-proxy-zw58s" in "kube-system" namespace to be "Ready" ...
	I0202 16:15:42.304566   76521 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-20220202161336-76172" in "kube-system" namespace to be "Ready" ...
	I0202 16:15:42.375442   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:42.375579   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:42.571174   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:42.668088   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:42.681856   76521 pod_ready.go:92] pod "kube-scheduler-addons-20220202161336-76172" in "kube-system" namespace has status "Ready":"True"
	I0202 16:15:42.681866   76521 pod_ready.go:81] duration metric: took 377.289971ms waiting for pod "kube-scheduler-addons-20220202161336-76172" in "kube-system" namespace to be "Ready" ...
	I0202 16:15:42.681871   76521 pod_ready.go:38] duration metric: took 40.084335147s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0202 16:15:42.681889   76521 api_server.go:51] waiting for apiserver process to appear ...
	I0202 16:15:42.681949   76521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0202 16:15:42.733816   76521 api_server.go:71] duration metric: took 40.960912127s to wait for apiserver process to appear ...
	I0202 16:15:42.733836   76521 api_server.go:87] waiting for apiserver healthz status ...
	I0202 16:15:42.733850   76521 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55526/healthz ...
	I0202 16:15:42.740745   76521 api_server.go:266] https://127.0.0.1:55526/healthz returned 200:
	ok
	I0202 16:15:42.742366   76521 api_server.go:140] control plane version: v1.23.2
	I0202 16:15:42.742378   76521 api_server.go:130] duration metric: took 8.537268ms to wait for apiserver health ...
	I0202 16:15:42.742384   76521 system_pods.go:43] waiting for kube-system pods to appear ...
	I0202 16:15:42.869489   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:42.870784   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:42.886640   76521 system_pods.go:59] 19 kube-system pods found
	I0202 16:15:42.886666   76521 system_pods.go:61] "coredns-64897985d-g5hn9" [bc49300b-d95c-41dd-bb83-f57141d52e29] Running
	I0202 16:15:42.886675   76521 system_pods.go:61] "csi-hostpath-attacher-0" [4d43ad81-8de5-473c-8b16-449b1c51882c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0202 16:15:42.886680   76521 system_pods.go:61] "csi-hostpath-provisioner-0" [3986e719-9d47-4a86-a230-0e018a491067] Pending / Ready:ContainersNotReady (containers with unready status: [csi-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-provisioner])
	I0202 16:15:42.886688   76521 system_pods.go:61] "csi-hostpath-resizer-0" [98b976a8-12c1-4c9c-b1fe-1495b0f7e17f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0202 16:15:42.886695   76521 system_pods.go:61] "csi-hostpath-snapshotter-0" [fdd0fa23-a312-4b3c-b0f7-7920c7c6ad7c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-snapshotter])
	I0202 16:15:42.886703   76521 system_pods.go:61] "csi-hostpathplugin-0" [817ce823-a845-443b-87d2-06733d14ca38] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe])
	I0202 16:15:42.886711   76521 system_pods.go:61] "etcd-addons-20220202161336-76172" [b9ce9f3d-08f1-414a-9ac7-19fcb033b9cd] Running
	I0202 16:15:42.886715   76521 system_pods.go:61] "kube-apiserver-addons-20220202161336-76172" [bf326373-3141-4d8e-a288-974508bf379f] Running
	I0202 16:15:42.886719   76521 system_pods.go:61] "kube-controller-manager-addons-20220202161336-76172" [8973dab2-a2f2-499e-9309-861aad9b8847] Running
	I0202 16:15:42.886723   76521 system_pods.go:61] "kube-ingress-dns-minikube" [e0c1ac1f-d8b2-431a-bdd3-7f4f53913b0e] Running
	I0202 16:15:42.886735   76521 system_pods.go:61] "kube-proxy-zw58s" [18ffb898-4180-4163-a80b-728ab8ce88e8] Running
	I0202 16:15:42.886741   76521 system_pods.go:61] "kube-scheduler-addons-20220202161336-76172" [30f274a1-0b0b-4933-8709-96e60ff086d9] Running
	I0202 16:15:42.886746   76521 system_pods.go:61] "metrics-server-6b76bd68b6-k8qtb" [69e334f4-7756-4177-8e4b-5e16226362d5] Running
	I0202 16:15:42.886750   76521 system_pods.go:61] "registry-2flh5" [6e619784-6af9-41ef-a4ef-5024d805ac23] Running
	I0202 16:15:42.886753   76521 system_pods.go:61] "registry-proxy-n52x4" [b425d15d-9d83-4418-9ed5-21c8e923628e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0202 16:15:42.886759   76521 system_pods.go:61] "snapshot-controller-7f76975c56-5fbqg" [cb07960c-ce7a-4382-86cc-766e2d1f9b0c] Running
	I0202 16:15:42.886777   76521 system_pods.go:61] "snapshot-controller-7f76975c56-gn45m" [394df5d0-80af-487a-b27c-90bbc161fd56] Running
	I0202 16:15:42.886780   76521 system_pods.go:61] "storage-provisioner" [a92a8e59-5e34-4ff0-80da-d8cae48b579a] Running
	I0202 16:15:42.886783   76521 system_pods.go:61] "tiller-deploy-6d67d5465d-xk6zs" [c7ca8081-d3ba-4140-9328-5c0b504abbb4] Running
	I0202 16:15:42.886788   76521 system_pods.go:74] duration metric: took 144.397941ms to wait for pod list to return data ...
	I0202 16:15:42.886794   76521 default_sa.go:34] waiting for default service account to be created ...
	I0202 16:15:43.073324   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:43.081889   76521 default_sa.go:45] found service account: "default"
	I0202 16:15:43.081900   76521 default_sa.go:55] duration metric: took 195.098885ms for default service account to be created ...
	I0202 16:15:43.081908   76521 system_pods.go:116] waiting for k8s-apps to be running ...
	I0202 16:15:43.166793   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:43.289164   76521 system_pods.go:86] 19 kube-system pods found
	I0202 16:15:43.289186   76521 system_pods.go:89] "coredns-64897985d-g5hn9" [bc49300b-d95c-41dd-bb83-f57141d52e29] Running
	I0202 16:15:43.289202   76521 system_pods.go:89] "csi-hostpath-attacher-0" [4d43ad81-8de5-473c-8b16-449b1c51882c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0202 16:15:43.289216   76521 system_pods.go:89] "csi-hostpath-provisioner-0" [3986e719-9d47-4a86-a230-0e018a491067] Pending / Ready:ContainersNotReady (containers with unready status: [csi-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-provisioner])
	I0202 16:15:43.289225   76521 system_pods.go:89] "csi-hostpath-resizer-0" [98b976a8-12c1-4c9c-b1fe-1495b0f7e17f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0202 16:15:43.289236   76521 system_pods.go:89] "csi-hostpath-snapshotter-0" [fdd0fa23-a312-4b3c-b0f7-7920c7c6ad7c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-snapshotter])
	I0202 16:15:43.289244   76521 system_pods.go:89] "csi-hostpathplugin-0" [817ce823-a845-443b-87d2-06733d14ca38] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe])
	I0202 16:15:43.289253   76521 system_pods.go:89] "etcd-addons-20220202161336-76172" [b9ce9f3d-08f1-414a-9ac7-19fcb033b9cd] Running
	I0202 16:15:43.289260   76521 system_pods.go:89] "kube-apiserver-addons-20220202161336-76172" [bf326373-3141-4d8e-a288-974508bf379f] Running
	I0202 16:15:43.289267   76521 system_pods.go:89] "kube-controller-manager-addons-20220202161336-76172" [8973dab2-a2f2-499e-9309-861aad9b8847] Running
	I0202 16:15:43.289272   76521 system_pods.go:89] "kube-ingress-dns-minikube" [e0c1ac1f-d8b2-431a-bdd3-7f4f53913b0e] Running
	I0202 16:15:43.289284   76521 system_pods.go:89] "kube-proxy-zw58s" [18ffb898-4180-4163-a80b-728ab8ce88e8] Running
	I0202 16:15:43.289289   76521 system_pods.go:89] "kube-scheduler-addons-20220202161336-76172" [30f274a1-0b0b-4933-8709-96e60ff086d9] Running
	I0202 16:15:43.289298   76521 system_pods.go:89] "metrics-server-6b76bd68b6-k8qtb" [69e334f4-7756-4177-8e4b-5e16226362d5] Running
	I0202 16:15:43.289303   76521 system_pods.go:89] "registry-2flh5" [6e619784-6af9-41ef-a4ef-5024d805ac23] Running
	I0202 16:15:43.289313   76521 system_pods.go:89] "registry-proxy-n52x4" [b425d15d-9d83-4418-9ed5-21c8e923628e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0202 16:15:43.289318   76521 system_pods.go:89] "snapshot-controller-7f76975c56-5fbqg" [cb07960c-ce7a-4382-86cc-766e2d1f9b0c] Running
	I0202 16:15:43.289323   76521 system_pods.go:89] "snapshot-controller-7f76975c56-gn45m" [394df5d0-80af-487a-b27c-90bbc161fd56] Running
	I0202 16:15:43.289331   76521 system_pods.go:89] "storage-provisioner" [a92a8e59-5e34-4ff0-80da-d8cae48b579a] Running
	I0202 16:15:43.289336   76521 system_pods.go:89] "tiller-deploy-6d67d5465d-xk6zs" [c7ca8081-d3ba-4140-9328-5c0b504abbb4] Running
	I0202 16:15:43.289342   76521 system_pods.go:126] duration metric: took 207.425764ms to wait for k8s-apps to be running ...
	I0202 16:15:43.289348   76521 system_svc.go:44] waiting for kubelet service to be running ....
	I0202 16:15:43.289416   76521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0202 16:15:43.300849   76521 system_svc.go:56] duration metric: took 11.496805ms WaitForService to wait for kubelet.
	I0202 16:15:43.300865   76521 kubeadm.go:547] duration metric: took 41.527956455s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0202 16:15:43.300879   76521 node_conditions.go:102] verifying NodePressure condition ...
	I0202 16:15:43.367949   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:43.369661   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:43.481912   76521 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0202 16:15:43.481928   76521 node_conditions.go:123] node cpu capacity is 6
	I0202 16:15:43.481937   76521 node_conditions.go:105] duration metric: took 181.053102ms to run NodePressure ...
	I0202 16:15:43.481943   76521 start.go:213] waiting for startup goroutines ...
	I0202 16:15:43.569199   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:43.666885   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:43.868935   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:43.870133   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:44.069407   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:44.166993   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:44.371006   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0202 16:15:44.371051   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:44.569331   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:44.670247   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:44.867538   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:44.867984   76521 kapi.go:108] duration metric: took 37.544938356s to wait for kubernetes.io/minikube-addons=registry ...
	I0202 16:15:45.069945   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:45.165979   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:45.368331   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:45.568480   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:45.666580   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:45.933809   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:46.070477   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:46.172220   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:46.373299   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:46.569028   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:46.666382   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:46.870087   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:47.068075   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:47.166601   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:47.367997   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:47.572218   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:47.666613   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:47.868239   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:48.069362   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:48.166947   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:48.368806   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:48.567990   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:48.668422   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:48.873872   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:49.068560   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:49.164856   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:49.371771   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:49.568405   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:49.667090   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:49.868002   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:50.073040   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:50.165620   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:50.369342   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:50.572583   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:50.665050   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:50.868834   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:51.069289   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:51.170614   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:51.373353   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:51.572915   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:51.668811   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:51.868595   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:52.068817   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:52.169521   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:52.367651   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:52.570675   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:52.665755   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:52.871933   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:53.069938   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:53.165381   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:53.368405   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:53.570335   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:53.665288   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:53.868242   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:54.068519   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:54.167155   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:54.373196   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:54.573367   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:54.667674   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:54.870127   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:55.070497   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:55.169868   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:55.371555   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:55.569490   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:55.668952   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:55.867814   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:56.073821   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:56.165424   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:56.372482   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:56.568148   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:56.667973   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:56.868619   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:57.073157   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:57.166753   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:57.368540   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:57.574123   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:57.663371   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:57.868771   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:58.069854   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:58.165374   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:58.369621   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:58.574065   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:58.671570   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:58.869550   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:59.073189   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:59.166046   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:59.369287   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:15:59.572351   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:15:59.668204   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:15:59.868833   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:00.069864   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:00.166345   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:00.367836   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:00.572107   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:00.665351   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:00.869825   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:01.068366   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:01.167813   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:01.368778   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:01.569261   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:01.668574   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:01.874337   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:02.069014   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:02.168154   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:02.369462   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:02.571379   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:02.665865   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:02.873961   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:03.069581   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:03.164282   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:03.369458   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:03.570150   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:03.664285   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:03.868313   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:04.068976   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:04.164666   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:04.368488   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:04.569985   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:04.664837   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:04.877436   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:05.073188   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:05.165958   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:05.369469   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:05.570464   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:05.665907   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:05.870165   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:06.075548   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:06.166406   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:06.370595   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:06.573805   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:06.666011   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:06.872754   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:07.071903   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:07.167200   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:07.378252   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:07.572671   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:07.672388   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:07.877525   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:08.073546   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:08.170698   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:08.372992   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:08.576759   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:08.668943   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:08.873427   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:09.077967   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:09.172368   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:09.373911   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:09.578048   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:09.671662   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:09.875261   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:10.079126   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:10.173403   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:10.375371   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:10.577033   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:10.670643   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:10.875030   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:11.077728   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:11.175520   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:11.380287   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:11.576270   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:11.670403   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:11.879956   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:12.080181   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:12.173065   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:12.381100   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:12.580881   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:12.671282   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:12.880346   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:13.083133   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:13.173459   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:13.378239   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:13.589135   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:13.677275   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:13.877336   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:14.079615   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:14.176647   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:14.387544   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:14.577907   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:14.679682   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:14.878312   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:15.080034   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:15.178351   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:15.379308   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:15.583721   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:15.680735   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:15.887374   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:16.080465   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:16.176545   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:16.385509   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:16.581059   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:16.674760   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:16.883775   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:17.083988   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:17.180247   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:17.389137   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:17.580901   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:17.680507   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:17.887082   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:18.081010   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:18.176578   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:18.380728   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:18.580810   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:18.679857   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:18.883101   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:19.080993   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:19.182141   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:19.382186   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:19.581770   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:19.680710   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:19.881553   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:20.082907   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:20.177179   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:20.381491   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:20.585564   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:20.794979   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:20.880877   76521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0202 16:16:21.083305   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:21.179225   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:21.422408   76521 kapi.go:108] duration metric: took 1m15.611820381s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0202 16:16:21.581870   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:21.677404   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:22.090601   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:22.184115   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:22.586285   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:22.681883   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:23.084549   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:23.182238   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:23.586099   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:23.677260   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:24.083399   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:24.182378   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:24.589137   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:24.678395   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:25.089860   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:25.183251   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:25.583674   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:25.679784   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:26.084826   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:26.181134   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:26.583148   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:26.678573   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:27.089141   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:27.180156   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:27.588406   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:27.688640   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:28.086673   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:28.181701   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:28.585493   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:28.681433   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:29.085734   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:29.183267   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:29.588241   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:29.682803   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:30.085052   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:30.184350   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0202 16:16:30.584178   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:30.681877   76521 kapi.go:108] duration metric: took 1m20.530895097s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0202 16:16:30.710843   76521 out.go:176] * Your GCP credentials will now be mounted into every pod created in the addons-20220202161336-76172 cluster.
	I0202 16:16:30.735144   76521 out.go:176] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0202 16:16:30.761345   76521 out.go:176] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0202 16:16:31.088191   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:31.593637   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:32.086486   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:32.586494   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:33.086584   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:33.585328   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:34.089225   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:34.585932   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:35.089133   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:35.586951   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:36.086010   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:36.587439   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:37.093941   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:37.586793   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:38.086854   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:38.592075   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:39.086475   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:39.589148   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:40.090955   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:40.586954   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:41.094843   76521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0202 16:16:41.588951   76521 kapi.go:108] duration metric: took 1m32.036081669s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0202 16:16:41.616938   76521 out.go:176] * Enabled addons: storage-provisioner, ingress-dns, helm-tiller, default-storageclass, metrics-server, olm, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0202 16:16:41.616954   76521 addons.go:417] enableAddons completed in 1m39.825636161s
	I0202 16:16:41.680703   76521 start.go:496] kubectl: 1.19.7, cluster: 1.23.2 (minor skew: 4)
	I0202 16:16:41.706118   76521 out.go:176] 
	W0202 16:16:41.706282   76521 out.go:241] ! /usr/local/bin/kubectl is version 1.19.7, which may have incompatibilities with Kubernetes 1.23.2.
	I0202 16:16:41.753048   76521 out.go:176]   - Want kubectl v1.23.2? Try 'minikube kubectl -- get pods -A'
	I0202 16:16:41.779054   76521 out.go:176] * Done! kubectl is now configured to use "addons-20220202161336-76172" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-02-03 00:13:57 UTC, end at Thu 2022-02-03 00:22:32 UTC. --
	Feb 03 00:17:24 addons-20220202161336-76172 dockerd[469]: time="2022-02-03T00:17:24.008746770Z" level=info msg="ignoring event" container=06f5a481b9c85b3123df037c36f065f2f0d8b83611c0e37bca3da4ecfb67d110 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 00:17:24 addons-20220202161336-76172 dockerd[469]: time="2022-02-03T00:17:24.116661101Z" level=info msg="ignoring event" container=0603d65ef693f64ce558c0d6136e6ad24b19b559c4335ca92b5951058779f12f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 00:17:24 addons-20220202161336-76172 dockerd[469]: time="2022-02-03T00:17:24.120987355Z" level=info msg="ignoring event" container=4b081612b4cfa431349b65f30ab65c50351cdbca88561cd90feea8a8e3f09799 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 00:17:24 addons-20220202161336-76172 dockerd[469]: time="2022-02-03T00:17:24.190505565Z" level=info msg="ignoring event" container=bf1438b39377e2c3b383ae8706b55b20936043803019c8e93dce4e75a15e9127 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 00:17:24 addons-20220202161336-76172 dockerd[469]: time="2022-02-03T00:17:24.190557109Z" level=info msg="ignoring event" container=2c97c235fa87f4fe2722b9f53afe58c7cef4253b19452a512602d4372ecd2a76 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 00:17:24 addons-20220202161336-76172 dockerd[469]: time="2022-02-03T00:17:24.190573723Z" level=info msg="ignoring event" container=5833aeed447bab600fc887271a99b7a617531a63ecb08569753d10f596452e05 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 00:17:24 addons-20220202161336-76172 dockerd[469]: time="2022-02-03T00:17:24.191028527Z" level=info msg="ignoring event" container=4f4f127a0b2b923c4633a8d32c3ad5b092e4b912dac2f63e7de1384f9fd72a01 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 00:17:24 addons-20220202161336-76172 dockerd[469]: time="2022-02-03T00:17:24.208668711Z" level=info msg="ignoring event" container=ce3a58ff31ba5f0855458d54ab7f32f38666a1a70df76af6b0aed0526f36719a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 00:17:24 addons-20220202161336-76172 dockerd[469]: time="2022-02-03T00:17:24.218998371Z" level=info msg="ignoring event" container=655d6c242d4763f9523c0a01d40b3290b6fac4ccc0714fa71fb2f9c02d1af8d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 00:17:24 addons-20220202161336-76172 dockerd[469]: time="2022-02-03T00:17:24.300254472Z" level=info msg="ignoring event" container=510164491e8c988e1177445e3f038a7be837f28921109ffd68b2cfd86154d8f4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 00:17:24 addons-20220202161336-76172 dockerd[469]: time="2022-02-03T00:17:24.307198058Z" level=info msg="ignoring event" container=e9fddeda8bf074969cb44dc2f37060444f08429ab45f51aee45ac10c35c01688 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 00:17:24 addons-20220202161336-76172 dockerd[469]: time="2022-02-03T00:17:24.313825576Z" level=info msg="ignoring event" container=2495a9a0534a9d708b25f25376195688a16e07d529718424246669d013c5db6c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 00:17:24 addons-20220202161336-76172 dockerd[469]: time="2022-02-03T00:17:24.327448038Z" level=info msg="ignoring event" container=13e4d9f21048501831cb405b65ef06c23d5f2b90da81f9686aaca41fb16adbcf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 00:17:30 addons-20220202161336-76172 dockerd[469]: time="2022-02-03T00:17:30.816053158Z" level=info msg="ignoring event" container=5eec668c3fe8f1b1024f3122cc4e4645fefba53158493e821b1d2bf3187dd4f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 00:17:30 addons-20220202161336-76172 dockerd[469]: time="2022-02-03T00:17:30.833629891Z" level=info msg="ignoring event" container=994c29b8ca3901117d4af458c051ee82317f54b861ece61187499650ae8d9c3f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 00:17:30 addons-20220202161336-76172 dockerd[469]: time="2022-02-03T00:17:30.922597834Z" level=info msg="ignoring event" container=343df2c78086cf55b261da2db7c4f81c55cb5ac547531da433adfc6651e4ffbb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 00:17:30 addons-20220202161336-76172 dockerd[469]: time="2022-02-03T00:17:30.949145656Z" level=info msg="ignoring event" container=124c0d324b87e3c54d231a474a79f84677c535f207df6e5ea44ba2c9a434df0e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 00:17:51 addons-20220202161336-76172 dockerd[469]: time="2022-02-03T00:17:51.089723986Z" level=warning msg="reference for unknown type: " digest="sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4" remote="quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4"
	Feb 03 00:17:51 addons-20220202161336-76172 dockerd[469]: time="2022-02-03T00:17:51.706145470Z" level=info msg="Attempting next endpoint for pull after error: manifest unknown: manifest unknown"
	Feb 03 00:19:21 addons-20220202161336-76172 dockerd[469]: time="2022-02-03T00:19:21.069329205Z" level=warning msg="reference for unknown type: " digest="sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4" remote="quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4"
	Feb 03 00:19:21 addons-20220202161336-76172 dockerd[469]: time="2022-02-03T00:19:21.676356188Z" level=info msg="Attempting next endpoint for pull after error: manifest unknown: manifest unknown"
	Feb 03 00:22:12 addons-20220202161336-76172 dockerd[469]: time="2022-02-03T00:22:12.920712941Z" level=warning msg="reference for unknown type: " digest="sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4" remote="quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4"
	Feb 03 00:22:13 addons-20220202161336-76172 dockerd[469]: time="2022-02-03T00:22:13.522717929Z" level=info msg="Attempting next endpoint for pull after error: manifest unknown: manifest unknown"
	Feb 03 00:22:30 addons-20220202161336-76172 dockerd[469]: time="2022-02-03T00:22:30.612661696Z" level=info msg="ignoring event" container=9b87ea29d6d6b539565c8d3ada7c4622e10b0ef5c9f36577f9386e8a6bb95214 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 00:22:30 addons-20220202161336-76172 dockerd[469]: time="2022-02-03T00:22:30.655785644Z" level=info msg="ignoring event" container=b0daeab83ea4f07b4616b834f33a6b79b2e65cba4f5c69c44cbd6e4b9a3d0a24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID
	3446cdb19e63a       nginx@sha256:da9c94bec1da829ebd52431a84502ec471c8e548ffb2cedbf36260fd9bd1d4d3                                           4 minutes ago       Running             nginx                     0                   954fc2d8803a1
	a78b51ec2f3a8       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:26c7b2454f1c946d7c80839251d939606620f37c2f275be2796c1ffd96c438f6            6 minutes ago       Running             gcp-auth                  0                   b9cc6b4ea8a4a
	962ffa913eb78       quay.io/operator-framework/olm@sha256:e74b2ac57963c7f3ba19122a8c31c9f2a0deb3c0c5cac9e5323ccffd0ca198ed                  6 minutes ago       Running             packageserver             0                   4346308bf5b41
	27b186087ae2b       quay.io/operator-framework/olm@sha256:e74b2ac57963c7f3ba19122a8c31c9f2a0deb3c0c5cac9e5323ccffd0ca198ed                  6 minutes ago       Running             packageserver             0                   1e236600d7f08
	58255cae03e82       k8s.gcr.io/ingress-nginx/controller@sha256:f766669fdcf3dc26347ed273a55e754b427eb4411ee075a53f30718b4499076a             6 minutes ago       Running             controller                0                   44556996af99f
	a32ed9824c371       quay.io/operator-framework/olm@sha256:e74b2ac57963c7f3ba19122a8c31c9f2a0deb3c0c5cac9e5323ccffd0ca198ed                  6 minutes ago       Running             olm-operator              0                   6a6eb84486a84
	9f19d608d621f       gcr.io/google_containers/kube-registry-proxy@sha256:1040f25a5273de0d72c54865a8efd47e3292de9fb8e5353e3fa76736b854f2da    6 minutes ago       Running             registry-proxy            0                   7673062e5d794
	d03b246757bb1       quay.io/operator-framework/olm@sha256:e74b2ac57963c7f3ba19122a8c31c9f2a0deb3c0c5cac9e5323ccffd0ca198ed                  6 minutes ago       Running             catalog-operator          0                   903e428dc3d45
	53f650183cbdb       6e38f40d628db                                                                                                           6 minutes ago       Running             storage-provisioner       1                   b5b74641ecf54
	887acb53ff928       c41e9fcadf5a2                                                                                                           7 minutes ago       Exited              patch                     1                   b4e13edcaf536
	cb199662118ea       k8s.gcr.io/ingress-nginx/kube-webhook-certgen@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660   7 minutes ago       Exited              create                    0                   d7ae5aee4da7d
	50fbab25f3213       registry@sha256:d5459fcb27aecc752520df4b492b08358a1912fcdfa454f7d2101d4b09991daa                                        7 minutes ago       Running             registry                  0                   70e4d16e5e30c
	32f2c9aed0803       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f        7 minutes ago       Running             minikube-ingress-dns      0                   91f2e1846d8c9
	58be7005faa30       6e38f40d628db                                                                                                           7 minutes ago       Exited              storage-provisioner       0                   b5b74641ecf54
	8cccb2151bf84       a4ca41631cc7a                                                                                                           7 minutes ago       Running             coredns                   0                   c0eb3db32f932
	78314818c7769       d922ca3da64b3                                                                                                           7 minutes ago       Running             kube-proxy                0                   07a4c19784e1a
	c0f7c6224693e       8a0228dd6a683                                                                                                           7 minutes ago       Running             kube-apiserver            0                   38a40f1a6c0ab
	9d0648766709f       4783639ba7e03                                                                                                           7 minutes ago       Running             kube-controller-manager   0                   eb9e3a8f0d773
	0a53434ae32f9       6114d758d6d16                                                                                                           7 minutes ago       Running             kube-scheduler            0                   303607b7e469f
	d21d7aeb74b27       25f8c7f3da61c                                                                                                           7 minutes ago       Running             etcd                      0                   3f3cc3f6b3fcb
	
	* 
	* ==> coredns [8cccb2151bf8] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               addons-20220202161336-76172
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-20220202161336-76172
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7ecaa98a6d1dab5935ea4b7778c6e187f5bde82
	                    minikube.k8s.io/name=addons-20220202161336-76172
	                    minikube.k8s.io/updated_at=2022_02_02T16_14_49_0700
	                    minikube.k8s.io/version=v1.25.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-20220202161336-76172
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 03 Feb 2022 00:14:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-20220202161336-76172
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 03 Feb 2022 00:22:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 03 Feb 2022 00:17:53 +0000   Thu, 03 Feb 2022 00:14:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 03 Feb 2022 00:17:53 +0000   Thu, 03 Feb 2022 00:14:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 03 Feb 2022 00:17:53 +0000   Thu, 03 Feb 2022 00:14:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 03 Feb 2022 00:17:53 +0000   Thu, 03 Feb 2022 00:14:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-20220202161336-76172
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6088600Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6088600Ki
	  pods:               110
	System Info:
	  Machine ID:                 8de776e053e140d6a14c2d2def3d6bb8
	  System UUID:                31423f05-f2ca-4091-8631-1c81bf98076a
	  Boot ID:                    3dce2c91-bcef-4be1-84c8-042fa5532ce9
	  Kernel Version:             5.10.25-linuxkit
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.12
	  Kubelet Version:            v1.23.2
	  Kube-Proxy Version:         v1.23.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  gcp-auth                    gcp-auth-59b76855d9-h6rpz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	  ingress-nginx               ingress-nginx-controller-6d5f55986b-5z44z              100m (1%)     0 (0%)      90Mi (1%)        0 (0%)         7m27s
	  kube-system                 coredns-64897985d-g5hn9                                100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     7m30s
	  kube-system                 etcd-addons-20220202161336-76172                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         7m40s
	  kube-system                 kube-apiserver-addons-20220202161336-76172             250m (4%)     0 (0%)      0 (0%)           0 (0%)         7m40s
	  kube-system                 kube-controller-manager-addons-20220202161336-76172    200m (3%)     0 (0%)      0 (0%)           0 (0%)         7m40s
	  kube-system                 kube-ingress-dns-minikube                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m28s
	  kube-system                 kube-proxy-zw58s                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m30s
	  kube-system                 kube-scheduler-addons-20220202161336-76172             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m40s
	  kube-system                 registry-2flh5                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m27s
	  kube-system                 registry-proxy-n52x4                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m27s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m29s
	  olm                         catalog-operator-755d759b4b-clptc                      10m (0%)      0 (0%)      80Mi (1%)        0 (0%)         7m25s
	  olm                         olm-operator-c755654d4-9m4zw                           10m (0%)      0 (0%)      160Mi (2%)       0 (0%)         7m25s
	  olm                         operatorhubio-catalog-ntfcf                            10m (0%)      0 (0%)      50Mi (0%)        0 (0%)         6m54s
	  olm                         packageserver-57c69dc557-7chf2                         10m (0%)      0 (0%)      50Mi (0%)        0 (0%)         6m47s
	  olm                         packageserver-57c69dc557-g6cs2                         10m (0%)      0 (0%)      50Mi (0%)        0 (0%)         6m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                900m (15%)   0 (0%)
	  memory             650Mi (10%)  170Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                                            Age    From        Message
	  ----     ------                                            ----   ----        -------
	  Normal   Starting                                          7m27s  kube-proxy  
	  Warning  listen tcp4 :31490: bind: address already in use  7m16s  kube-proxy  can't open port "nodePort for ingress-nginx/ingress-nginx-controller:https" (:31490/tcp4), skipping it
	  Warning  listen tcp4 :30489: bind: address already in use  7m16s  kube-proxy  can't open port "nodePort for ingress-nginx/ingress-nginx-controller:http" (:30489/tcp4), skipping it
	  Normal   NodeHasSufficientPID                              7m43s  kubelet     Node addons-20220202161336-76172 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced                           7m43s  kubelet     Updated Node Allocatable limit across pods
	  Normal   Starting                                          7m43s  kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory                           7m43s  kubelet     Node addons-20220202161336-76172 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure                             7m43s  kubelet     Node addons-20220202161336-76172 status is now: NodeHasNoDiskPressure
	  Normal   NodeReady                                         7m33s  kubelet     Node addons-20220202161336-76172 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.024767] bpfilter: read fail 0
	[  +0.034298] bpfilter: read fail 0
	[  +0.022685] bpfilter: read fail 0
	[  +0.040716] bpfilter: write fail -32
	[  +0.027465] bpfilter: write fail -32
	[  +0.027336] bpfilter: read fail 0
	[  +0.036718] bpfilter: read fail 0
	[  +0.026922] bpfilter: read fail 0
	[  +0.023750] bpfilter: read fail 0
	[  +0.025242] bpfilter: read fail 0
	[  +0.033087] bpfilter: read fail 0
	[  +0.036832] bpfilter: read fail 0
	[  +0.026319] bpfilter: read fail 0
	[  +0.019665] bpfilter: read fail 0
	[  +0.027041] bpfilter: write fail -32
	[  +0.027316] bpfilter: read fail 0
	[  +0.026793] bpfilter: read fail 0
	[  +0.031656] bpfilter: read fail 0
	[  +0.028272] bpfilter: read fail 0
	[  +0.033371] bpfilter: read fail 0
	[  +0.038650] bpfilter: read fail 0
	[  +0.026926] bpfilter: read fail 0
	[  +0.028060] bpfilter: read fail 0
	[  +0.036245] bpfilter: read fail 0
	[  +0.034662] bpfilter: read fail 0
	
	* 
	* ==> etcd [d21d7aeb74b2] <==
	* {"level":"info","ts":"2022-02-03T00:14:43.815Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-02-03T00:14:43.815Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-02-03T00:14:43.815Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-02-03T00:14:44.300Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2022-02-03T00:14:44.300Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-02-03T00:14:44.300Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-02-03T00:14:44.300Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-02-03T00:14:44.300Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-02-03T00:14:44.300Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-02-03T00:14:44.300Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-02-03T00:14:44.301Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-20220202161336-76172 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-02-03T00:14:44.301Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-02-03T00:14:44.301Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-02-03T00:14:44.301Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-02-03T00:14:44.301Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-02-03T00:14:44.302Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-02-03T00:14:44.302Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-02-03T00:14:44.302Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-02-03T00:14:44.304Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-02-03T00:14:44.304Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-02-03T00:14:44.304Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2022-02-03T00:16:20.793Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"117.577097ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:9921"}
	{"level":"info","ts":"2022-02-03T00:16:20.793Z","caller":"traceutil/trace.go:171","msg":"trace[396962209] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1239; }","duration":"117.689392ms","start":"2022-02-03T00:16:20.675Z","end":"2022-02-03T00:16:20.793Z","steps":["trace[396962209] 'range keys from in-memory index tree'  (duration: 117.406249ms)"],"step_count":1}
	{"level":"warn","ts":"2022-02-03T00:16:20.793Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"131.970942ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" ","response":"range_response_count:1 size:2846"}
	{"level":"info","ts":"2022-02-03T00:16:20.793Z","caller":"traceutil/trace.go:171","msg":"trace[1280481906] range","detail":"{range_begin:/registry/validatingwebhookconfigurations/; range_end:/registry/validatingwebhookconfigurations0; response_count:1; response_revision:1239; }","duration":"132.038355ms","start":"2022-02-03T00:16:20.661Z","end":"2022-02-03T00:16:20.793Z","steps":["trace[1280481906] 'range keys from in-memory index tree'  (duration: 131.896606ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  00:22:33 up 10 min,  0 users,  load average: 0.52, 0.73, 0.53
	Linux addons-20220202161336-76172 5.10.25-linuxkit #1 SMP Tue Mar 23 09:27:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [c0f7c6224693] <==
	* E0203 00:16:21.857494       1 available_controller.go:524] v1.packages.operators.coreos.com failed with: failing or missing response from https://10.101.154.226:5443/apis/packages.operators.coreos.com/v1: Get "https://10.101.154.226:5443/apis/packages.operators.coreos.com/v1": dial tcp 10.101.154.226:5443: connect: connection refused
	E0203 00:16:22.018545       1 available_controller.go:524] v1.packages.operators.coreos.com failed with: failing or missing response from https://10.101.154.226:5443/apis/packages.operators.coreos.com/v1: Get "https://10.101.154.226:5443/apis/packages.operators.coreos.com/v1": dial tcp 10.101.154.226:5443: connect: connection refused
	E0203 00:16:22.313720       1 available_controller.go:524] v1.packages.operators.coreos.com failed with: failing or missing response from https://10.101.154.226:5443/apis/packages.operators.coreos.com/v1: Get "https://10.101.154.226:5443/apis/packages.operators.coreos.com/v1": dial tcp 10.101.154.226:5443: connect: connection refused
	E0203 00:16:22.339919       1 available_controller.go:524] v1.packages.operators.coreos.com failed with: failing or missing response from https://10.101.154.226:5443/apis/packages.operators.coreos.com/v1: Get "https://10.101.154.226:5443/apis/packages.operators.coreos.com/v1": dial tcp 10.101.154.226:5443: connect: connection refused
	W0203 00:16:22.708270       1 handler_proxy.go:104] no RequestInfo found in the context
	E0203 00:16:22.708387       1 controller.go:116] loading OpenAPI spec for "v1.packages.operators.coreos.com" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0203 00:16:22.708440       1 controller.go:129] OpenAPI AggregationController: action for item v1.packages.operators.coreos.com: Rate Limited Requeue.
	E0203 00:16:23.327972       1 available_controller.go:524] v1.packages.operators.coreos.com failed with: failing or missing response from https://10.101.154.226:5443/apis/packages.operators.coreos.com/v1: Get "https://10.101.154.226:5443/apis/packages.operators.coreos.com/v1": dial tcp 10.101.154.226:5443: connect: connection refused
	E0203 00:16:23.621573       1 available_controller.go:524] v1.packages.operators.coreos.com failed with: failing or missing response from https://10.101.154.226:5443/apis/packages.operators.coreos.com/v1: Get "https://10.101.154.226:5443/apis/packages.operators.coreos.com/v1": dial tcp 10.101.154.226:5443: connect: connection refused
	W0203 00:16:23.845500       1 dispatcher.go:180] Failed calling webhook, failing open gcp-auth-mutate-sa.k8s.io: failed calling webhook "gcp-auth-mutate-sa.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s": dial tcp 10.101.155.175:443: connect: connection refused
	E0203 00:16:23.845536       1 dispatcher.go:184] failed calling webhook "gcp-auth-mutate-sa.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s": dial tcp 10.101.155.175:443: connect: connection refused
	W0203 00:16:23.853741       1 dispatcher.go:180] Failed calling webhook, failing open gcp-auth-mutate-sa.k8s.io: failed calling webhook "gcp-auth-mutate-sa.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s": dial tcp 10.101.155.175:443: connect: connection refused
	E0203 00:16:23.853776       1 dispatcher.go:184] failed calling webhook "gcp-auth-mutate-sa.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s": dial tcp 10.101.155.175:443: connect: connection refused
	E0203 00:16:28.743013       1 available_controller.go:524] v1.packages.operators.coreos.com failed with: failing or missing response from https://10.101.154.226:5443/apis/packages.operators.coreos.com/v1: Get "https://10.101.154.226:5443/apis/packages.operators.coreos.com/v1": dial tcp 10.101.154.226:5443: connect: connection refused
	W0203 00:16:33.827865       1 dispatcher.go:180] Failed calling webhook, failing open gcp-auth-mutate-sa.k8s.io: failed calling webhook "gcp-auth-mutate-sa.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s": dial tcp 10.101.155.175:443: connect: connection refused
	E0203 00:16:33.827902       1 dispatcher.go:184] failed calling webhook "gcp-auth-mutate-sa.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s": dial tcp 10.101.155.175:443: connect: connection refused
	W0203 00:16:33.838810       1 dispatcher.go:180] Failed calling webhook, failing open gcp-auth-mutate-sa.k8s.io: failed calling webhook "gcp-auth-mutate-sa.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s": dial tcp 10.101.155.175:443: connect: connection refused
	E0203 00:16:33.838845       1 dispatcher.go:184] failed calling webhook "gcp-auth-mutate-sa.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s": dial tcp 10.101.155.175:443: connect: connection refused
	I0203 00:17:08.148787       1 controller.go:611] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0203 00:17:31.237944       1 controller.go:611] quota admission added evaluator for: ingresses.networking.k8s.io
	W0203 00:17:31.514491       1 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured
	W0203 00:17:31.532932       1 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured
	W0203 00:17:31.700922       1 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured
	I0203 00:17:31.962667       1 alloc.go:329] "allocated clusterIPs" service="default/nginx" clusterIPs=map[IPv4:10.102.31.96]
	
	* 
	* ==> kube-controller-manager [9d0648766709] <==
	* E0203 00:19:23.653789       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0203 00:19:43.717778       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0203 00:19:43.717795       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0203 00:19:47.709820       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0203 00:19:47.709854       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0203 00:19:55.690666       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0203 00:19:55.690737       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0203 00:20:36.277349       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0203 00:20:36.277423       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0203 00:20:40.451046       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0203 00:20:40.451082       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0203 00:20:44.060124       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0203 00:20:44.060161       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0203 00:21:25.187230       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0203 00:21:25.187268       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0203 00:21:27.341506       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0203 00:21:27.341542       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0203 00:21:33.646166       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0203 00:21:33.646201       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0203 00:22:07.791298       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0203 00:22:07.791333       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0203 00:22:08.019302       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0203 00:22:08.019338       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0203 00:22:22.023289       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0203 00:22:22.023305       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [78314818c776] <==
	* I0203 00:15:03.812771       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0203 00:15:03.812865       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0203 00:15:03.812882       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0203 00:15:05.810064       1 server_others.go:206] "Using iptables Proxier"
	I0203 00:15:05.810243       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0203 00:15:05.810272       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0203 00:15:05.810292       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0203 00:15:05.810834       1 server.go:656] "Version info" version="v1.23.2"
	I0203 00:15:05.812038       1 config.go:317] "Starting service config controller"
	I0203 00:15:05.812070       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0203 00:15:05.812260       1 config.go:226] "Starting endpoint slice config controller"
	I0203 00:15:05.812266       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0203 00:15:05.912358       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0203 00:15:05.912389       1 shared_informer.go:247] Caches are synced for service config 
	E0203 00:15:15.979390       1 proxier.go:1600] "can't open port, skipping it" err="listen tcp4 :31490: bind: address already in use" port={Description:nodePort for ingress-nginx/ingress-nginx-controller:https IP: IPFamily:4 Port:31490 Protocol:TCP}
	E0203 00:15:15.979486       1 proxier.go:1600] "can't open port, skipping it" err="listen tcp4 :30489: bind: address already in use" port={Description:nodePort for ingress-nginx/ingress-nginx-controller:http IP: IPFamily:4 Port:30489 Protocol:TCP}
	
	* 
	* ==> kube-scheduler [0a53434ae32f] <==
	* W0203 00:14:46.216658       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0203 00:14:46.216685       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0203 00:14:46.216098       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0203 00:14:46.216794       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0203 00:14:46.217004       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0203 00:14:46.217315       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0203 00:14:46.217048       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0203 00:14:46.217448       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0203 00:14:47.022464       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0203 00:14:47.022480       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0203 00:14:47.045886       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0203 00:14:47.045936       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0203 00:14:47.048174       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0203 00:14:47.048203       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0203 00:14:47.068151       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0203 00:14:47.068200       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0203 00:14:47.208758       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0203 00:14:47.208774       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0203 00:14:47.314036       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0203 00:14:47.314075       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0203 00:14:47.336481       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0203 00:14:47.336518       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0203 00:14:47.337544       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0203 00:14:47.337577       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0203 00:14:49.713335       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-02-03 00:13:57 UTC, end at Thu 2022-02-03 00:22:33 UTC. --
	Feb 03 00:20:26 addons-20220202161336-76172 kubelet[1957]: E0203 00:20:26.681605    1957 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4\\\"\"" pod="olm/operatorhubio-catalog-ntfcf" podUID=54a5ae42-fc2d-4757-9c9b-df185000a397
	Feb 03 00:20:40 addons-20220202161336-76172 kubelet[1957]: E0203 00:20:40.681477    1957 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4\\\"\"" pod="olm/operatorhubio-catalog-ntfcf" podUID=54a5ae42-fc2d-4757-9c9b-df185000a397
	Feb 03 00:20:54 addons-20220202161336-76172 kubelet[1957]: E0203 00:20:54.660346    1957 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4\\\"\"" pod="olm/operatorhubio-catalog-ntfcf" podUID=54a5ae42-fc2d-4757-9c9b-df185000a397
	Feb 03 00:21:08 addons-20220202161336-76172 kubelet[1957]: E0203 00:21:08.660759    1957 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4\\\"\"" pod="olm/operatorhubio-catalog-ntfcf" podUID=54a5ae42-fc2d-4757-9c9b-df185000a397
	Feb 03 00:21:19 addons-20220202161336-76172 kubelet[1957]: E0203 00:21:19.639259    1957 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4\\\"\"" pod="olm/operatorhubio-catalog-ntfcf" podUID=54a5ae42-fc2d-4757-9c9b-df185000a397
	Feb 03 00:21:34 addons-20220202161336-76172 kubelet[1957]: E0203 00:21:34.638556    1957 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4\\\"\"" pod="olm/operatorhubio-catalog-ntfcf" podUID=54a5ae42-fc2d-4757-9c9b-df185000a397
	Feb 03 00:21:45 addons-20220202161336-76172 kubelet[1957]: E0203 00:21:45.617680    1957 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4\\\"\"" pod="olm/operatorhubio-catalog-ntfcf" podUID=54a5ae42-fc2d-4757-9c9b-df185000a397
	Feb 03 00:22:00 addons-20220202161336-76172 kubelet[1957]: E0203 00:22:00.617840    1957 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4\\\"\"" pod="olm/operatorhubio-catalog-ntfcf" podUID=54a5ae42-fc2d-4757-9c9b-df185000a397
	Feb 03 00:22:13 addons-20220202161336-76172 kubelet[1957]: E0203 00:22:13.524332    1957 remote_image.go:216] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: manifest for quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4 not found: manifest unknown: manifest unknown" image="quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4"
	Feb 03 00:22:13 addons-20220202161336-76172 kubelet[1957]: E0203 00:22:13.524376    1957 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: manifest for quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4 not found: manifest unknown: manifest unknown" image="quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4"
	Feb 03 00:22:13 addons-20220202161336-76172 kubelet[1957]: E0203 00:22:13.524468    1957 kuberuntime_manager.go:918] container &Container{Name:registry-server,Image:quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {<nil>} 10m DecimalSI},memory: {{52428800 0} {<nil>} 50Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rc5xv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod operatorhubio-catalog-ntfcf_olm(54a5ae42-fc2d-4757-9c9b-df185000a397): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: manifest for quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4 not found: manifest unknown: manifest unknown
	Feb 03 00:22:13 addons-20220202161336-76172 kubelet[1957]: E0203 00:22:13.524494    1957 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: manifest for quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4 not found: manifest unknown: manifest unknown\"" pod="olm/operatorhubio-catalog-ntfcf" podUID=54a5ae42-fc2d-4757-9c9b-df185000a397
	Feb 03 00:22:24 addons-20220202161336-76172 kubelet[1957]: E0203 00:22:24.596603    1957 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4\\\"\"" pod="olm/operatorhubio-catalog-ntfcf" podUID=54a5ae42-fc2d-4757-9c9b-df185000a397
	Feb 03 00:22:30 addons-20220202161336-76172 kubelet[1957]: I0203 00:22:30.829456    1957 scope.go:110] "RemoveContainer" containerID="9b87ea29d6d6b539565c8d3ada7c4622e10b0ef5c9f36577f9386e8a6bb95214"
	Feb 03 00:22:30 addons-20220202161336-76172 kubelet[1957]: I0203 00:22:30.851022    1957 scope.go:110] "RemoveContainer" containerID="9b87ea29d6d6b539565c8d3ada7c4622e10b0ef5c9f36577f9386e8a6bb95214"
	Feb 03 00:22:30 addons-20220202161336-76172 kubelet[1957]: E0203 00:22:30.851675    1957 remote_runtime.go:572] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: 9b87ea29d6d6b539565c8d3ada7c4622e10b0ef5c9f36577f9386e8a6bb95214" containerID="9b87ea29d6d6b539565c8d3ada7c4622e10b0ef5c9f36577f9386e8a6bb95214"
	Feb 03 00:22:30 addons-20220202161336-76172 kubelet[1957]: I0203 00:22:30.851742    1957 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:docker ID:9b87ea29d6d6b539565c8d3ada7c4622e10b0ef5c9f36577f9386e8a6bb95214} err="failed to get container status \"9b87ea29d6d6b539565c8d3ada7c4622e10b0ef5c9f36577f9386e8a6bb95214\": rpc error: code = Unknown desc = Error: No such container: 9b87ea29d6d6b539565c8d3ada7c4622e10b0ef5c9f36577f9386e8a6bb95214"
	Feb 03 00:22:30 addons-20220202161336-76172 kubelet[1957]: I0203 00:22:30.911971    1957 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xhjk\" (UniqueName: \"kubernetes.io/projected/69e334f4-7756-4177-8e4b-5e16226362d5-kube-api-access-2xhjk\") pod \"69e334f4-7756-4177-8e4b-5e16226362d5\" (UID: \"69e334f4-7756-4177-8e4b-5e16226362d5\") "
	Feb 03 00:22:30 addons-20220202161336-76172 kubelet[1957]: I0203 00:22:30.912060    1957 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/69e334f4-7756-4177-8e4b-5e16226362d5-tmp-dir\") pod \"69e334f4-7756-4177-8e4b-5e16226362d5\" (UID: \"69e334f4-7756-4177-8e4b-5e16226362d5\") "
	Feb 03 00:22:30 addons-20220202161336-76172 kubelet[1957]: W0203 00:22:30.912240    1957 empty_dir.go:517] Warning: Failed to clear quota on /var/lib/kubelet/pods/69e334f4-7756-4177-8e4b-5e16226362d5/volumes/kubernetes.io~empty-dir/tmp-dir: clearQuota called, but quotas disabled
	Feb 03 00:22:30 addons-20220202161336-76172 kubelet[1957]: I0203 00:22:30.912332    1957 operation_generator.go:909] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69e334f4-7756-4177-8e4b-5e16226362d5-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "69e334f4-7756-4177-8e4b-5e16226362d5" (UID: "69e334f4-7756-4177-8e4b-5e16226362d5"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Feb 03 00:22:30 addons-20220202161336-76172 kubelet[1957]: I0203 00:22:30.913854    1957 operation_generator.go:909] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69e334f4-7756-4177-8e4b-5e16226362d5-kube-api-access-2xhjk" (OuterVolumeSpecName: "kube-api-access-2xhjk") pod "69e334f4-7756-4177-8e4b-5e16226362d5" (UID: "69e334f4-7756-4177-8e4b-5e16226362d5"). InnerVolumeSpecName "kube-api-access-2xhjk". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 03 00:22:31 addons-20220202161336-76172 kubelet[1957]: I0203 00:22:31.012241    1957 reconciler.go:295] "Volume detached for volume \"kube-api-access-2xhjk\" (UniqueName: \"kubernetes.io/projected/69e334f4-7756-4177-8e4b-5e16226362d5-kube-api-access-2xhjk\") on node \"addons-20220202161336-76172\" DevicePath \"\""
	Feb 03 00:22:31 addons-20220202161336-76172 kubelet[1957]: I0203 00:22:31.012284    1957 reconciler.go:295] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/69e334f4-7756-4177-8e4b-5e16226362d5-tmp-dir\") on node \"addons-20220202161336-76172\" DevicePath \"\""
	Feb 03 00:22:32 addons-20220202161336-76172 kubelet[1957]: I0203 00:22:32.611491    1957 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=69e334f4-7756-4177-8e4b-5e16226362d5 path="/var/lib/kubelet/pods/69e334f4-7756-4177-8e4b-5e16226362d5/volumes"
	
	* 
	* ==> storage-provisioner [53f650183cbd] <==
	* I0203 00:15:36.865235       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0203 00:15:36.874407       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0203 00:15:36.874452       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0203 00:15:36.881733       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0203 00:15:36.881973       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-20220202161336-76172_12f79ace-af14-4382-b376-f2a1fabbbcec!
	I0203 00:15:36.881854       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6595d80a-0d92-4718-a02e-7b1dd31e80e1", APIVersion:"v1", ResourceVersion:"1007", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-20220202161336-76172_12f79ace-af14-4382-b376-f2a1fabbbcec became leader
	I0203 00:15:36.982864       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-20220202161336-76172_12f79ace-af14-4382-b376-f2a1fabbbcec!
	
	* 
	* ==> storage-provisioner [58be7005faa3] <==
	* I0203 00:15:05.877118       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0203 00:15:35.863478       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p addons-20220202161336-76172 -n addons-20220202161336-76172
helpers_test.go:262: (dbg) Run:  kubectl --context addons-20220202161336-76172 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: ingress-nginx-admission-create-njc8n ingress-nginx-admission-patch-6b8ck operatorhubio-catalog-ntfcf
helpers_test.go:273: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context addons-20220202161336-76172 describe pod ingress-nginx-admission-create-njc8n ingress-nginx-admission-patch-6b8ck operatorhubio-catalog-ntfcf
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context addons-20220202161336-76172 describe pod ingress-nginx-admission-create-njc8n ingress-nginx-admission-patch-6b8ck operatorhubio-catalog-ntfcf: exit status 1 (56.210607ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-njc8n" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-6b8ck" not found
	Error from server (NotFound): pods "operatorhubio-catalog-ntfcf" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context addons-20220202161336-76172 describe pod ingress-nginx-admission-create-njc8n ingress-nginx-admission-patch-6b8ck operatorhubio-catalog-ntfcf: exit status 1
--- FAIL: TestAddons/parallel/MetricsServer (328.47s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (554.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-20220202171134-76172 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker 
E0202 17:31:41.976450   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p calico-20220202171134-76172 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker : exit status 80 (9m14.672315469s)

-- stdout --
	* [calico-20220202171134-76172] minikube v1.25.1 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=13251
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node calico-20220202171134-76172 in cluster calico-20220202171134-76172
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.2 on Docker 20.10.12 ...
	  - kubelet.housekeeping-interval=5m
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0202 17:31:35.627867   93004 out.go:297] Setting OutFile to fd 1 ...
	I0202 17:31:35.628009   93004 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0202 17:31:35.628016   93004 out.go:310] Setting ErrFile to fd 2...
	I0202 17:31:35.628020   93004 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0202 17:31:35.628095   93004 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/bin
	I0202 17:31:35.628427   93004 out.go:304] Setting JSON to false
	I0202 17:31:35.657683   93004 start.go:112] hostinfo: {"hostname":"37309.local","uptime":32470,"bootTime":1643819425,"procs":369,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0202 17:31:35.657791   93004 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0202 17:31:35.683797   93004 out.go:176] * [calico-20220202171134-76172] minikube v1.25.1 on Darwin 11.2.3
	I0202 17:31:35.683979   93004 notify.go:174] Checking for updates...
	I0202 17:31:35.731551   93004 out.go:176]   - MINIKUBE_LOCATION=13251
	I0202 17:31:35.757546   93004 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
	I0202 17:31:35.783314   93004 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0202 17:31:35.809482   93004 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0202 17:31:35.835459   93004 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube
	I0202 17:31:35.835910   93004 config.go:176] Loaded profile config "cilium-20220202171134-76172": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0202 17:31:35.835952   93004 driver.go:344] Setting default libvirt URI to qemu:///system
	I0202 17:31:35.935840   93004 docker.go:132] docker version: linux-20.10.6
	I0202 17:31:35.935994   93004 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0202 17:31:36.126643   93004 info.go:263] docker info: {ID:LVNT:MQD4:UDW3:UJT2:HLHX:4UTC:4NTE:52G5:6DGB:YSKS:CFIX:B23W Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:53 SystemTime:2022-02-03 01:31:36.069539647 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerA
ddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=secc
omp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0202 17:31:36.153436   93004 out.go:176] * Using the docker driver based on user configuration
	I0202 17:31:36.153476   93004 start.go:281] selected driver: docker
	I0202 17:31:36.153487   93004 start.go:798] validating driver "docker" against <nil>
	I0202 17:31:36.153504   93004 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0202 17:31:36.156274   93004 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0202 17:31:36.345964   93004 info.go:263] docker info: {ID:LVNT:MQD4:UDW3:UJT2:HLHX:4UTC:4NTE:52G5:6DGB:YSKS:CFIX:B23W Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:53 SystemTime:2022-02-03 01:31:36.287651829 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerA
ddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=secc
omp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0202 17:31:36.346079   93004 start_flags.go:288] no existing cluster config was found, will generate one from the flags 
	I0202 17:31:36.346207   93004 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	I0202 17:31:36.346227   93004 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0202 17:31:36.346244   93004 cni.go:93] Creating CNI manager for "calico"
	I0202 17:31:36.346250   93004 start_flags.go:297] Found "Calico" CNI - setting NetworkPlugin=cni
	I0202 17:31:36.346258   93004 start_flags.go:302] config:
	{Name:calico-20220202171134-76172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:calico-20220202171134-76172 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: Ne
tworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0202 17:31:36.428646   93004 out.go:176] * Starting control plane node calico-20220202171134-76172 in cluster calico-20220202171134-76172
	I0202 17:31:36.428717   93004 cache.go:120] Beginning downloading kic base image for docker with docker
	I0202 17:31:36.460727   93004 out.go:176] * Pulling base image ...
	I0202 17:31:36.460771   93004 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0202 17:31:36.460801   93004 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I0202 17:31:36.460834   93004 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4
	I0202 17:31:36.460849   93004 cache.go:57] Caching tarball of preloaded images
	I0202 17:31:36.460983   93004 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0202 17:31:36.460997   93004 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.2 on docker
	I0202 17:31:36.461685   93004 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202171134-76172/config.json ...
	I0202 17:31:36.461781   93004 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202171134-76172/config.json: {Name:mk5ac6f4f343ec1a4a60c9c42ff7649324014b3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 17:31:36.604517   93004 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I0202 17:31:36.604542   93004 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I0202 17:31:36.604554   93004 cache.go:208] Successfully downloaded all kic artifacts
	I0202 17:31:36.604591   93004 start.go:313] acquiring machines lock for calico-20220202171134-76172: {Name:mk5aeef1fb6d49c80b18c9f711f2d8c11cef84a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0202 17:31:36.605223   93004 start.go:317] acquired machines lock for "calico-20220202171134-76172" in 620.004µs
	I0202 17:31:36.605258   93004 start.go:89] Provisioning new machine with config: &{Name:calico-20220202171134-76172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:calico-20220202171134-76172 Namespace:default APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0202 17:31:36.605365   93004 start.go:126] createHost starting for "" (driver="docker")
	I0202 17:31:36.671261   93004 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0202 17:31:36.671491   93004 start.go:160] libmachine.API.Create for "calico-20220202171134-76172" (driver="docker")
	I0202 17:31:36.671521   93004 client.go:168] LocalClient.Create starting
	I0202 17:31:36.671628   93004 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem
	I0202 17:31:36.671672   93004 main.go:130] libmachine: Decoding PEM data...
	I0202 17:31:36.671690   93004 main.go:130] libmachine: Parsing certificate...
	I0202 17:31:36.671752   93004 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/cert.pem
	I0202 17:31:36.671780   93004 main.go:130] libmachine: Decoding PEM data...
	I0202 17:31:36.671789   93004 main.go:130] libmachine: Parsing certificate...
	I0202 17:31:36.672228   93004 cli_runner.go:133] Run: docker network inspect calico-20220202171134-76172 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0202 17:31:36.798902   93004 cli_runner.go:180] docker network inspect calico-20220202171134-76172 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0202 17:31:36.799023   93004 network_create.go:254] running [docker network inspect calico-20220202171134-76172] to gather additional debugging logs...
	I0202 17:31:36.799051   93004 cli_runner.go:133] Run: docker network inspect calico-20220202171134-76172
	W0202 17:31:36.919002   93004 cli_runner.go:180] docker network inspect calico-20220202171134-76172 returned with exit code 1
	I0202 17:31:36.919026   93004 network_create.go:257] error running [docker network inspect calico-20220202171134-76172]: docker network inspect calico-20220202171134-76172: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220202171134-76172
	I0202 17:31:36.919042   93004 network_create.go:259] output of [docker network inspect calico-20220202171134-76172]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220202171134-76172
	
	** /stderr **
	I0202 17:31:36.919145   93004 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0202 17:31:37.039189   93004 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000130c90] misses:0}
	I0202 17:31:37.039229   93004 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0202 17:31:37.039244   93004 network_create.go:106] attempt to create docker network calico-20220202171134-76172 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0202 17:31:37.039325   93004 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220202171134-76172
	W0202 17:31:37.156681   93004 cli_runner.go:180] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220202171134-76172 returned with exit code 1
	W0202 17:31:37.156719   93004 network_create.go:98] failed to create docker network calico-20220202171134-76172 192.168.49.0/24, will retry: subnet is taken
	I0202 17:31:37.156955   93004 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000130c90] amended:false}} dirty:map[] misses:0}
	I0202 17:31:37.156977   93004 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0202 17:31:37.157159   93004 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000130c90] amended:true}} dirty:map[192.168.49.0:0xc000130c90 192.168.58.0:0xc000130d00] misses:0}
	I0202 17:31:37.157172   93004 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0202 17:31:37.157178   93004 network_create.go:106] attempt to create docker network calico-20220202171134-76172 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0202 17:31:37.157253   93004 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220202171134-76172
	I0202 17:31:39.803859   93004 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220202171134-76172: (2.646489464s)
	I0202 17:31:39.803885   93004 network_create.go:90] docker network calico-20220202171134-76172 192.168.58.0/24 created
	I0202 17:31:39.803899   93004 kic.go:106] calculated static IP "192.168.58.2" for the "calico-20220202171134-76172" container
	I0202 17:31:39.804018   93004 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0202 17:31:39.945511   93004 cli_runner.go:133] Run: docker volume create calico-20220202171134-76172 --label name.minikube.sigs.k8s.io=calico-20220202171134-76172 --label created_by.minikube.sigs.k8s.io=true
	I0202 17:31:40.091676   93004 oci.go:102] Successfully created a docker volume calico-20220202171134-76172
	I0202 17:31:40.091839   93004 cli_runner.go:133] Run: docker run --rm --name calico-20220202171134-76172-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220202171134-76172 --entrypoint /usr/bin/test -v calico-20220202171134-76172:/var gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -d /var/lib
	I0202 17:31:40.954514   93004 oci.go:106] Successfully prepared a docker volume calico-20220202171134-76172
	I0202 17:31:40.954570   93004 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0202 17:31:40.954588   93004 kic.go:179] Starting extracting preloaded images to volume ...
	I0202 17:31:40.954771   93004 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220202171134-76172:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir
	I0202 17:31:47.293208   93004 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220202171134-76172:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir: (6.338141532s)
	I0202 17:31:47.293276   93004 kic.go:188] duration metric: took 6.338510 seconds to extract preloaded images to volume
	I0202 17:31:47.293833   93004 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0202 17:31:47.548435   93004 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220202171134-76172 --name calico-20220202171134-76172 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220202171134-76172 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220202171134-76172 --network calico-20220202171134-76172 --ip 192.168.58.2 --volume calico-20220202171134-76172:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b
	I0202 17:31:57.078217   93004 cli_runner.go:186] Completed: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220202171134-76172 --name calico-20220202171134-76172 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220202171134-76172 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220202171134-76172 --network calico-20220202171134-76172 --ip 192.168.58.2 --volume calico-20220202171134-76172:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b: (9.529374904s)
	I0202 17:31:57.078568   93004 cli_runner.go:133] Run: docker container inspect calico-20220202171134-76172 --format={{.State.Running}}
	I0202 17:31:57.211059   93004 cli_runner.go:133] Run: docker container inspect calico-20220202171134-76172 --format={{.State.Status}}
	I0202 17:31:57.339161   93004 cli_runner.go:133] Run: docker exec calico-20220202171134-76172 stat /var/lib/dpkg/alternatives/iptables
	I0202 17:31:57.529792   93004 oci.go:281] the created container "calico-20220202171134-76172" has a running status.
	I0202 17:31:57.529825   93004 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/calico-20220202171134-76172/id_rsa...
	I0202 17:31:57.812165   93004 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/calico-20220202171134-76172/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0202 17:31:58.008915   93004 cli_runner.go:133] Run: docker container inspect calico-20220202171134-76172 --format={{.State.Status}}
	I0202 17:31:58.142904   93004 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0202 17:31:58.142930   93004 kic_runner.go:114] Args: [docker exec --privileged calico-20220202171134-76172 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0202 17:31:58.317074   93004 cli_runner.go:133] Run: docker container inspect calico-20220202171134-76172 --format={{.State.Status}}
	I0202 17:31:58.439782   93004 machine.go:88] provisioning docker machine ...
	I0202 17:31:58.439823   93004 ubuntu.go:169] provisioning hostname "calico-20220202171134-76172"
	I0202 17:31:58.439938   93004 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220202171134-76172
	I0202 17:31:58.561216   93004 main.go:130] libmachine: Using SSH client type: native
	I0202 17:31:58.561447   93004 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 53284 <nil> <nil>}
	I0202 17:31:58.561471   93004 main.go:130] libmachine: About to run SSH command:
	sudo hostname calico-20220202171134-76172 && echo "calico-20220202171134-76172" | sudo tee /etc/hostname
	I0202 17:31:58.714504   93004 main.go:130] libmachine: SSH cmd err, output: <nil>: calico-20220202171134-76172
	
	I0202 17:31:58.715753   93004 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220202171134-76172
	I0202 17:31:58.840262   93004 main.go:130] libmachine: Using SSH client type: native
	I0202 17:31:58.840414   93004 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 53284 <nil> <nil>}
	I0202 17:31:58.840427   93004 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20220202171134-76172' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20220202171134-76172/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20220202171134-76172' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0202 17:31:58.976645   93004 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0202 17:31:58.976678   93004 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube}
	I0202 17:31:58.976707   93004 ubuntu.go:177] setting up certificates
	I0202 17:31:58.976722   93004 provision.go:83] configureAuth start
	I0202 17:31:58.976822   93004 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220202171134-76172
	I0202 17:31:59.118005   93004 provision.go:138] copyHostCerts
	I0202 17:31:59.118095   93004 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.pem, removing ...
	I0202 17:31:59.118104   93004 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.pem
	I0202 17:31:59.118203   93004 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.pem (1078 bytes)
	I0202 17:31:59.118433   93004 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cert.pem, removing ...
	I0202 17:31:59.118446   93004 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cert.pem
	I0202 17:31:59.118513   93004 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cert.pem (1123 bytes)
	I0202 17:31:59.118654   93004 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/key.pem, removing ...
	I0202 17:31:59.118660   93004 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/key.pem
	I0202 17:31:59.118754   93004 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/key.pem (1679 bytes)
	I0202 17:31:59.118871   93004 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca-key.pem org=jenkins.calico-20220202171134-76172 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20220202171134-76172]
	I0202 17:31:59.202897   93004 provision.go:172] copyRemoteCerts
	I0202 17:31:59.202967   93004 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0202 17:31:59.203042   93004 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220202171134-76172
	I0202 17:31:59.324171   93004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53284 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/calico-20220202171134-76172/id_rsa Username:docker}
	I0202 17:31:59.418882   93004 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0202 17:31:59.438053   93004 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0202 17:31:59.456493   93004 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0202 17:31:59.474194   93004 provision.go:86] duration metric: configureAuth took 497.444946ms
	I0202 17:31:59.474207   93004 ubuntu.go:193] setting minikube options for container-runtime
	I0202 17:31:59.474367   93004 config.go:176] Loaded profile config "calico-20220202171134-76172": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0202 17:31:59.474450   93004 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220202171134-76172
	I0202 17:31:59.594537   93004 main.go:130] libmachine: Using SSH client type: native
	I0202 17:31:59.594686   93004 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 53284 <nil> <nil>}
	I0202 17:31:59.594702   93004 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0202 17:31:59.728591   93004 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0202 17:31:59.728605   93004 ubuntu.go:71] root file system type: overlay
	I0202 17:31:59.728767   93004 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0202 17:31:59.728863   93004 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220202171134-76172
	I0202 17:31:59.848937   93004 main.go:130] libmachine: Using SSH client type: native
	I0202 17:31:59.849118   93004 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 53284 <nil> <nil>}
	I0202 17:31:59.849172   93004 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0202 17:31:59.996096   93004 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0202 17:31:59.996232   93004 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220202171134-76172
	I0202 17:32:00.119907   93004 main.go:130] libmachine: Using SSH client type: native
	I0202 17:32:00.120052   93004 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 53284 <nil> <nil>}
	I0202 17:32:00.120067   93004 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0202 17:32:13.503658   93004 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-12-13 11:43:42.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-02-03 01:32:00.007270073 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
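The install step above leans on diff's exit status: `diff -u cur new || { mv new cur; ...; }` only swaps the unit in (and reloads/restarts docker) when the rendered file actually differs, so an unchanged config never triggers a daemon restart. A scratch-file sketch of the same idiom (paths illustrative, the systemctl calls elided):

```shell
#!/bin/sh
# cur stands in for the installed docker.service, new for the .new render.
cur=$(mktemp); new=$(mktemp)
echo 'ExecStart=/usr/bin/dockerd -H fd://' > "$cur"
echo 'ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376' > "$new"

# diff exits non-zero when the files differ, which triggers the install
diff -u "$cur" "$new" > /dev/null || mv "$new" "$cur"
cat "$cur"
```

Note the printed diff in the log is a side effect of the same `diff -u` call: the files differed, so the replacement branch ran and the unified diff went to the command's output.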
	I0202 17:32:13.503678   93004 machine.go:91] provisioned docker machine in 15.063491442s
	I0202 17:32:13.503684   93004 client.go:171] LocalClient.Create took 36.831173133s
	I0202 17:32:13.503698   93004 start.go:168] duration metric: libmachine.API.Create for "calico-20220202171134-76172" took 36.831222112s
	I0202 17:32:13.503708   93004 start.go:267] post-start starting for "calico-20220202171134-76172" (driver="docker")
	I0202 17:32:13.503712   93004 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0202 17:32:13.504435   93004 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0202 17:32:13.504670   93004 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220202171134-76172
	I0202 17:32:13.624549   93004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53284 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/calico-20220202171134-76172/id_rsa Username:docker}
	I0202 17:32:13.723406   93004 ssh_runner.go:195] Run: cat /etc/os-release
	I0202 17:32:13.727243   93004 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0202 17:32:13.727261   93004 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0202 17:32:13.727280   93004 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0202 17:32:13.727290   93004 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0202 17:32:13.727304   93004 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/addons for local assets ...
	I0202 17:32:13.727404   93004 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files for local assets ...
	I0202 17:32:13.728510   93004 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/761722.pem -> 761722.pem in /etc/ssl/certs
	I0202 17:32:13.728698   93004 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0202 17:32:13.736677   93004 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/761722.pem --> /etc/ssl/certs/761722.pem (1708 bytes)
	I0202 17:32:13.754135   93004 start.go:270] post-start completed in 250.412083ms
	I0202 17:32:13.754682   93004 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220202171134-76172
	I0202 17:32:13.876648   93004 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202171134-76172/config.json ...
	I0202 17:32:13.877069   93004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0202 17:32:13.877132   93004 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220202171134-76172
	I0202 17:32:14.008325   93004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53284 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/calico-20220202171134-76172/id_rsa Username:docker}
	I0202 17:32:14.103118   93004 start.go:129] duration metric: createHost completed in 37.496741862s
	I0202 17:32:14.103140   93004 start.go:80] releasing machines lock for "calico-20220202171134-76172", held for 37.496905865s
	I0202 17:32:14.103272   93004 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220202171134-76172
	I0202 17:32:14.225260   93004 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0202 17:32:14.225268   93004 ssh_runner.go:195] Run: systemctl --version
	I0202 17:32:14.225345   93004 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220202171134-76172
	I0202 17:32:14.225377   93004 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220202171134-76172
	I0202 17:32:14.354618   93004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53284 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/calico-20220202171134-76172/id_rsa Username:docker}
	I0202 17:32:14.354618   93004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53284 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/calico-20220202171134-76172/id_rsa Username:docker}
	I0202 17:32:14.447999   93004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0202 17:32:14.943558   93004 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0202 17:32:14.953409   93004 cruntime.go:272] skipping containerd shutdown because we are bound to it
	I0202 17:32:14.953469   93004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0202 17:32:14.963299   93004 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0202 17:32:14.976391   93004 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0202 17:32:15.038776   93004 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0202 17:32:15.098799   93004 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0202 17:32:15.111333   93004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0202 17:32:15.168431   93004 ssh_runner.go:195] Run: sudo systemctl start docker
	I0202 17:32:15.178456   93004 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0202 17:32:15.219278   93004 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0202 17:32:15.307334   93004 out.go:203] * Preparing Kubernetes v1.23.2 on Docker 20.10.12 ...
	I0202 17:32:15.307534   93004 cli_runner.go:133] Run: docker exec -t calico-20220202171134-76172 dig +short host.docker.internal
	I0202 17:32:15.492093   93004 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0202 17:32:15.492183   93004 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0202 17:32:15.496990   93004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0202 17:32:15.506999   93004 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20220202171134-76172
	I0202 17:32:15.652017   93004 out.go:176]   - kubelet.housekeeping-interval=5m
	I0202 17:32:15.652147   93004 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0202 17:32:15.652294   93004 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0202 17:32:15.685540   93004 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.2
	k8s.gcr.io/kube-scheduler:v1.23.2
	k8s.gcr.io/kube-controller-manager:v1.23.2
	k8s.gcr.io/kube-proxy:v1.23.2
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0202 17:32:15.685556   93004 docker.go:537] Images already preloaded, skipping extraction
	I0202 17:32:15.685666   93004 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0202 17:32:15.718960   93004 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.2
	k8s.gcr.io/kube-controller-manager:v1.23.2
	k8s.gcr.io/kube-scheduler:v1.23.2
	k8s.gcr.io/kube-proxy:v1.23.2
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0202 17:32:15.718979   93004 cache_images.go:84] Images are preloaded, skipping loading
	I0202 17:32:15.719097   93004 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0202 17:32:15.797862   93004 cni.go:93] Creating CNI manager for "calico"
	I0202 17:32:15.797893   93004 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0202 17:32:15.797917   93004 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20220202171134-76172 NodeName:calico-20220202171134-76172 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0202 17:32:15.798063   93004 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "calico-20220202171134-76172"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0202 17:32:15.798171   93004 kubeadm.go:931] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=calico-20220202171134-76172 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.2 ClusterName:calico-20220202171134-76172 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I0202 17:32:15.798246   93004 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.2
	I0202 17:32:15.806881   93004 binaries.go:44] Found k8s binaries, skipping transfer
	I0202 17:32:15.806964   93004 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0202 17:32:15.814392   93004 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (401 bytes)
	I0202 17:32:15.827728   93004 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0202 17:32:15.840173   93004 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2049 bytes)
	I0202 17:32:15.852599   93004 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0202 17:32:15.856427   93004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0202 17:32:15.866312   93004 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202171134-76172 for IP: 192.168.58.2
	I0202 17:32:15.866444   93004 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.key
	I0202 17:32:15.866510   93004 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/proxy-client-ca.key
	I0202 17:32:15.866566   93004 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202171134-76172/client.key
	I0202 17:32:15.866590   93004 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202171134-76172/client.crt with IP's: []
	I0202 17:32:15.965585   93004 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202171134-76172/client.crt ...
	I0202 17:32:15.965605   93004 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202171134-76172/client.crt: {Name:mkfe90203d4bbba784db78123d15f405eb11636b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 17:32:15.967224   93004 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202171134-76172/client.key ...
	I0202 17:32:15.967234   93004 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202171134-76172/client.key: {Name:mkc64020d1b14cfd62a077dd2161e65757f0f2fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 17:32:15.967745   93004 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202171134-76172/apiserver.key.cee25041
	I0202 17:32:15.967766   93004 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202171134-76172/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0202 17:32:16.259220   93004 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202171134-76172/apiserver.crt.cee25041 ...
	I0202 17:32:16.259235   93004 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202171134-76172/apiserver.crt.cee25041: {Name:mkc5a2ec13a523de4527cad361ca05a4fc84ae21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 17:32:16.260481   93004 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202171134-76172/apiserver.key.cee25041 ...
	I0202 17:32:16.260490   93004 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202171134-76172/apiserver.key.cee25041: {Name:mk53d34da6eeb25f74b147309471edcbe30bf5cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 17:32:16.261109   93004 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202171134-76172/apiserver.crt.cee25041 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202171134-76172/apiserver.crt
	I0202 17:32:16.261263   93004 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202171134-76172/apiserver.key.cee25041 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202171134-76172/apiserver.key
	I0202 17:32:16.261414   93004 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202171134-76172/proxy-client.key
	I0202 17:32:16.261432   93004 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202171134-76172/proxy-client.crt with IP's: []
	I0202 17:32:16.314297   93004 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202171134-76172/proxy-client.crt ...
	I0202 17:32:16.314309   93004 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202171134-76172/proxy-client.crt: {Name:mkf78c7518f08a19848a9d81cbb3af2425745f23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 17:32:16.315571   93004 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202171134-76172/proxy-client.key ...
	I0202 17:32:16.315580   93004 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202171134-76172/proxy-client.key: {Name:mk82dfc9a2eb50fa8a43638bb08eb842738fdafc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 17:32:16.316333   93004 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/76172.pem (1338 bytes)
	W0202 17:32:16.316383   93004 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/76172_empty.pem, impossibly tiny 0 bytes
	I0202 17:32:16.316404   93004 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca-key.pem (1679 bytes)
	I0202 17:32:16.316455   93004 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem (1078 bytes)
	I0202 17:32:16.316494   93004 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/cert.pem (1123 bytes)
	I0202 17:32:16.316530   93004 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/key.pem (1679 bytes)
	I0202 17:32:16.316604   93004 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/761722.pem (1708 bytes)
	I0202 17:32:16.317910   93004 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202171134-76172/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0202 17:32:16.335754   93004 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202171134-76172/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0202 17:32:16.352434   93004 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202171134-76172/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0202 17:32:16.370456   93004 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202171134-76172/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0202 17:32:16.389050   93004 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0202 17:32:16.406829   93004 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0202 17:32:16.424043   93004 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0202 17:32:16.441711   93004 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0202 17:32:16.458558   93004 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/761722.pem --> /usr/share/ca-certificates/761722.pem (1708 bytes)
	I0202 17:32:16.475606   93004 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0202 17:32:16.492937   93004 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/76172.pem --> /usr/share/ca-certificates/76172.pem (1338 bytes)
	I0202 17:32:16.509947   93004 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0202 17:32:16.523100   93004 ssh_runner.go:195] Run: openssl version
	I0202 17:32:16.529059   93004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/761722.pem && ln -fs /usr/share/ca-certificates/761722.pem /etc/ssl/certs/761722.pem"
	I0202 17:32:16.537108   93004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/761722.pem
	I0202 17:32:16.541684   93004 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb  3 00:25 /usr/share/ca-certificates/761722.pem
	I0202 17:32:16.541736   93004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/761722.pem
	I0202 17:32:16.547638   93004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/761722.pem /etc/ssl/certs/3ec20f2e.0"
	I0202 17:32:16.555625   93004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0202 17:32:16.564089   93004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0202 17:32:16.568594   93004 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb  3 00:14 /usr/share/ca-certificates/minikubeCA.pem
	I0202 17:32:16.568647   93004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0202 17:32:16.574393   93004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0202 17:32:16.582634   93004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/76172.pem && ln -fs /usr/share/ca-certificates/76172.pem /etc/ssl/certs/76172.pem"
	I0202 17:32:16.592410   93004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/76172.pem
	I0202 17:32:16.596902   93004 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb  3 00:25 /usr/share/ca-certificates/76172.pem
	I0202 17:32:16.596964   93004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/76172.pem
	I0202 17:32:16.602658   93004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/76172.pem /etc/ssl/certs/51391683.0"
	I0202 17:32:16.610787   93004 kubeadm.go:390] StartCluster: {Name:calico-20220202171134-76172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:calico-20220202171134-76172 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0202 17:32:16.610902   93004 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0202 17:32:16.640976   93004 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0202 17:32:16.650323   93004 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0202 17:32:16.657926   93004 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0202 17:32:16.657978   93004 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0202 17:32:16.665451   93004 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0202 17:32:16.665473   93004 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0202 17:32:17.248109   93004 out.go:203]   - Generating certificates and keys ...
	I0202 17:32:19.446329   93004 out.go:203]   - Booting up control plane ...
	I0202 17:32:35.008173   93004 out.go:203]   - Configuring RBAC rules ...
	I0202 17:32:35.447808   93004 cni.go:93] Creating CNI manager for "calico"
	I0202 17:32:35.498279   93004 out.go:176] * Configuring Calico (Container Networking Interface) ...
	I0202 17:32:35.498419   93004 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.2/kubectl ...
	I0202 17:32:35.498428   93004 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202049 bytes)
	I0202 17:32:35.531857   93004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0202 17:32:36.834636   93004 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.302718651s)
	I0202 17:32:36.834666   93004 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0202 17:32:36.834721   93004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:32:36.834722   93004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=e7ecaa98a6d1dab5935ea4b7778c6e187f5bde82 minikube.k8s.io/name=calico-20220202171134-76172 minikube.k8s.io/updated_at=2022_02_02T17_32_36_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:32:36.931133   93004 ops.go:34] apiserver oom_adj: -16
	I0202 17:32:36.931160   93004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:32:37.547709   93004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:32:38.048006   93004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:32:38.547989   93004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:32:39.048917   93004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:32:39.550985   93004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:32:40.048304   93004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:32:40.554516   93004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:32:41.048647   93004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:32:41.547658   93004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:32:42.047894   93004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:32:42.556667   93004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:32:43.056758   93004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:32:43.547664   93004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:32:44.051648   93004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:32:44.548063   93004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:32:45.054468   93004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:32:45.549258   93004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:32:46.053504   93004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:32:46.556923   93004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:32:47.056786   93004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:32:47.555953   93004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:32:48.055863   93004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:32:48.555601   93004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:32:49.054974   93004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:32:49.243674   93004 kubeadm.go:1007] duration metric: took 12.408685392s to wait for elevateKubeSystemPrivileges.
	I0202 17:32:49.243696   93004 kubeadm.go:392] StartCluster complete in 32.632108188s
	I0202 17:32:49.243715   93004 settings.go:142] acquiring lock: {Name:mkea0cd61827c3e8cfbcf6e420c5dbfe453193c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 17:32:49.243804   93004 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
	I0202 17:32:49.244847   93004 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig: {Name:mk472bf8b440ca08b271324870e056290a1de0e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 17:32:49.794226   93004 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20220202171134-76172" rescaled to 1
	I0202 17:32:49.794269   93004 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0202 17:32:49.794304   93004 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0202 17:32:49.822085   93004 out.go:176] * Verifying Kubernetes components...
	I0202 17:32:49.794321   93004 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0202 17:32:49.794481   93004 config.go:176] Loaded profile config "calico-20220202171134-76172": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0202 17:32:49.822147   93004 addons.go:65] Setting storage-provisioner=true in profile "calico-20220202171134-76172"
	I0202 17:32:49.822148   93004 addons.go:65] Setting default-storageclass=true in profile "calico-20220202171134-76172"
	I0202 17:32:49.822170   93004 addons.go:153] Setting addon storage-provisioner=true in "calico-20220202171134-76172"
	I0202 17:32:49.822178   93004 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20220202171134-76172"
	W0202 17:32:49.822180   93004 addons.go:165] addon storage-provisioner should already be in state true
	I0202 17:32:49.822218   93004 host.go:66] Checking if "calico-20220202171134-76172" exists ...
	I0202 17:32:49.822544   93004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0202 17:32:49.823417   93004 cli_runner.go:133] Run: docker container inspect calico-20220202171134-76172 --format={{.State.Status}}
	I0202 17:32:49.847468   93004 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20220202171134-76172
	I0202 17:32:49.865307   93004 cli_runner.go:133] Run: docker container inspect calico-20220202171134-76172 --format={{.State.Status}}
	I0202 17:32:49.885191   93004 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0202 17:32:49.999325   93004 addons.go:153] Setting addon default-storageclass=true in "calico-20220202171134-76172"
	W0202 17:32:49.999339   93004 addons.go:165] addon default-storageclass should already be in state true
	I0202 17:32:49.999357   93004 host.go:66] Checking if "calico-20220202171134-76172" exists ...
	I0202 17:32:49.999972   93004 cli_runner.go:133] Run: docker container inspect calico-20220202171134-76172 --format={{.State.Status}}
	I0202 17:32:50.009933   93004 node_ready.go:35] waiting up to 5m0s for node "calico-20220202171134-76172" to be "Ready" ...
	I0202 17:32:50.015185   93004 node_ready.go:49] node "calico-20220202171134-76172" has status "Ready":"True"
	I0202 17:32:50.015196   93004 node_ready.go:38] duration metric: took 5.231085ms waiting for node "calico-20220202171134-76172" to be "Ready" ...
	I0202 17:32:50.015202   93004 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0202 17:32:50.033111   93004 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace to be "Ready" ...
	I0202 17:32:50.088995   93004 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0202 17:32:50.089171   93004 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0202 17:32:50.089186   93004 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0202 17:32:50.089290   93004 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220202171134-76172
	I0202 17:32:50.179621   93004 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0202 17:32:50.179641   93004 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0202 17:32:50.179713   93004 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220202171134-76172
	I0202 17:32:50.244783   93004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53284 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/calico-20220202171134-76172/id_rsa Username:docker}
	I0202 17:32:50.324092   93004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53284 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/calico-20220202171134-76172/id_rsa Username:docker}
	I0202 17:32:50.407683   93004 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0202 17:32:50.514123   93004 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0202 17:32:51.315016   93004 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.429739678s)
	I0202 17:32:51.315037   93004 start.go:777] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0202 17:32:51.365279   93004 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0202 17:32:51.365305   93004 addons.go:417] enableAddons completed in 1.570972563s
	I0202 17:32:52.125998   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:32:54.615870   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:32:56.617206   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:32:58.623176   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:33:01.110548   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:33:03.112727   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:33:05.609729   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:33:08.114766   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:33:10.612599   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:33:13.108572   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:33:15.110814   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:33:17.114149   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:33:19.612099   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:33:22.108300   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:33:24.109392   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:33:26.109947   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:33:28.111084   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:33:30.621825   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:33:33.109985   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:33:35.111255   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:33:37.118119   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:33:39.615833   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:33:42.110383   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:33:44.110655   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:33:46.113089   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:33:48.615127   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:33:50.620740   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:33:53.110794   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:33:55.615679   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:33:58.109809   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:34:00.111480   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:34:02.111962   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:34:04.114710   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:34:06.609371   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:34:08.610103   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:34:10.611926   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:34:13.110740   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:34:15.128890   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:34:17.624883   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:34:20.117712   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:34:22.118179   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:34:24.616417   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:34:27.114486   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:34:29.618050   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:34:32.111836   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:34:34.115772   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:34:36.125643   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:34:38.617711   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:34:41.119576   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:34:43.610393   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:34:45.612375   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:34:48.112347   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:34:50.113892   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:34:52.610337   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:34:54.610887   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:34:56.613346   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:34:59.113742   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:35:01.616137   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:35:04.116086   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:35:06.611526   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:35:08.611948   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:35:11.112090   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:35:13.611815   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:35:15.613751   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:35:18.123982   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:35:20.622186   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:35:23.115812   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:35:25.611884   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:35:28.120200   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:35:30.611643   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:35:33.118056   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:35:35.119635   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:35:37.623030   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:35:40.111697   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:35:42.113079   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:35:44.117467   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:35:46.124558   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:35:48.619511   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:35:51.123647   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:35:53.612622   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:35:55.625358   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:35:58.117812   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:36:00.119135   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:36:02.614089   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:36:05.112359   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:36:07.115920   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:36:09.119209   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:36:11.613217   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:36:13.614196   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:36:16.119068   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:36:18.613704   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:36:21.115353   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:36:23.613588   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:36:26.119355   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:36:28.620415   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:36:31.114872   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:36:33.614852   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:36:36.113464   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:36:38.116805   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:36:40.119336   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:36:42.614595   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:36:44.616583   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:36:47.113845   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:36:49.114642   93004 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace has status "Ready":"False"
	I0202 17:36:50.120414   93004 pod_ready.go:81] duration metric: took 4m0.025474662s waiting for pod "calico-kube-controllers-8594699699-7dzcs" in "kube-system" namespace to be "Ready" ...
	E0202 17:36:50.120432   93004 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0202 17:36:50.120454   93004 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-hlxzw" in "kube-system" namespace to be "Ready" ...
	I0202 17:36:52.138889   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:36:54.638483   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:36:57.137498   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:36:59.638316   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:37:01.638615   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:37:03.640749   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:37:06.141137   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:37:08.142082   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:37:10.631702   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:37:12.637397   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:37:15.136945   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:37:17.637229   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:37:19.639214   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:37:22.134054   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:37:24.637507   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:37:27.133456   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:37:29.137629   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:37:31.640345   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:37:34.136623   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:37:36.638615   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:37:39.132114   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:37:41.133811   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:37:43.135561   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:37:45.138016   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:37:47.639320   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:37:49.639736   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:37:52.139910   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:37:54.639310   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:37:56.640228   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:37:59.138663   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:38:01.639028   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:38:03.639496   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:38:06.135195   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:38:08.136371   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:38:10.635267   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:38:12.639770   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:38:15.136505   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:38:17.639880   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:38:19.640003   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:38:22.139141   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:38:24.639228   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:38:27.133951   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:38:29.145144   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:38:31.640181   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:38:33.640463   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:38:36.134794   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:38:38.139108   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:38:40.139796   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:38:42.640223   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:38:44.640706   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:38:47.141573   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:38:49.640161   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:38:52.136263   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:38:54.136728   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:38:56.640783   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:38:59.135738   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:39:01.640465   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:39:04.136924   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:39:06.642913   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:39:09.135281   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:39:11.141004   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:39:13.640251   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:39:15.641595   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:39:18.137049   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:39:20.640100   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:39:22.641141   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:39:25.135583   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:39:27.138038   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:39:29.642277   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:39:32.137660   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:39:34.641837   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:39:37.138875   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:39:39.641513   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:39:41.642943   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:39:44.137958   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:39:46.140671   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:39:48.637477   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:39:51.141039   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:39:53.143228   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:39:55.636485   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:39:58.144074   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:40:00.144496   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:40:02.636054   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:40:04.637769   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:40:06.639630   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:40:09.136032   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:40:11.137698   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:40:13.639899   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:40:16.139727   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:40:18.638091   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:40:20.642197   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:40:23.144037   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:40:25.638337   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:40:27.640295   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:40:30.139115   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:40:32.142843   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:40:34.639853   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:40:36.642755   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:40:39.139396   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:40:41.641183   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:40:44.139246   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:40:46.140838   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:40:48.643112   93004 pod_ready.go:102] pod "calico-node-hlxzw" in "kube-system" namespace has status "Ready":"False"
	I0202 17:40:50.145852   93004 pod_ready.go:81] duration metric: took 4m0.01946698s waiting for pod "calico-node-hlxzw" in "kube-system" namespace to be "Ready" ...
	E0202 17:40:50.145862   93004 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0202 17:40:50.145875   93004 pod_ready.go:38] duration metric: took 8m0.118820095s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0202 17:40:50.172756   93004 out.go:176] 
	W0202 17:40:50.172891   93004 out.go:241] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0202 17:40:50.172906   93004 out.go:241] * 
	W0202 17:40:50.173916   93004 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0202 17:40:50.246328   93004 out.go:176] 

                                                
                                                
** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (554.70s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (326.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default
E0202 17:36:00.583326   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/false-20220202171134-76172/client.crt: no such file or directory
E0202 17:36:00.588700   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/false-20220202171134-76172/client.crt: no such file or directory
E0202 17:36:00.599825   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/false-20220202171134-76172/client.crt: no such file or directory
E0202 17:36:00.629620   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/false-20220202171134-76172/client.crt: no such file or directory
E0202 17:36:00.671349   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/false-20220202171134-76172/client.crt: no such file or directory
E0202 17:36:00.754679   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/false-20220202171134-76172/client.crt: no such file or directory
E0202 17:36:00.917067   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/false-20220202171134-76172/client.crt: no such file or directory
E0202 17:36:01.237347   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/false-20220202171134-76172/client.crt: no such file or directory
E0202 17:36:01.877751   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/false-20220202171134-76172/client.crt: no such file or directory
E0202 17:36:03.158439   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/false-20220202171134-76172/client.crt: no such file or directory
E0202 17:36:05.718800   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/false-20220202171134-76172/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.130472481s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default
E0202 17:36:10.840909   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/false-20220202171134-76172/client.crt: no such file or directory
E0202 17:36:10.870752   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/auto-20220202171133-76172/client.crt: no such file or directory
E0202 17:36:21.081407   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/false-20220202171134-76172/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.129604205s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default
E0202 17:36:41.563967   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/false-20220202171134-76172/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.150903462s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0202 17:36:41.985377   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.12515959s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.150148873s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0202 17:37:22.529214   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/false-20220202171134-76172/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default
E0202 17:37:28.377916   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/cilium-20220202171134-76172/client.crt: no such file or directory
E0202 17:37:28.383002   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/cilium-20220202171134-76172/client.crt: no such file or directory
E0202 17:37:28.394865   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/cilium-20220202171134-76172/client.crt: no such file or directory
E0202 17:37:28.415028   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/cilium-20220202171134-76172/client.crt: no such file or directory
E0202 17:37:28.455180   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/cilium-20220202171134-76172/client.crt: no such file or directory
E0202 17:37:28.539139   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/cilium-20220202171134-76172/client.crt: no such file or directory
E0202 17:37:28.699866   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/cilium-20220202171134-76172/client.crt: no such file or directory
E0202 17:37:29.020428   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/cilium-20220202171134-76172/client.crt: no such file or directory
E0202 17:37:29.660677   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/cilium-20220202171134-76172/client.crt: no such file or directory
E0202 17:37:30.941454   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/cilium-20220202171134-76172/client.crt: no such file or directory
E0202 17:37:32.793322   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/auto-20220202171133-76172/client.crt: no such file or directory
E0202 17:37:33.501686   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/cilium-20220202171134-76172/client.crt: no such file or directory
E0202 17:37:38.622018   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/cilium-20220202171134-76172/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136934041s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0202 17:37:42.699652   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202163007-76172/client.crt: no such file or directory
E0202 17:37:48.862429   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/cilium-20220202171134-76172/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.133689627s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0202 17:38:09.344102   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/cilium-20220202171134-76172/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default
E0202 17:38:30.211588   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202162514-76172/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.131548287s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0202 17:38:44.452622   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/false-20220202171134-76172/client.crt: no such file or directory
E0202 17:38:50.305640   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/cilium-20220202171134-76172/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.122184174s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0202 17:39:14.256711   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/custom-weave-20220202171134-76172/client.crt: no such file or directory
E0202 17:39:14.261996   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/custom-weave-20220202171134-76172/client.crt: no such file or directory
E0202 17:39:14.272098   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/custom-weave-20220202171134-76172/client.crt: no such file or directory
E0202 17:39:14.292240   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/custom-weave-20220202171134-76172/client.crt: no such file or directory
E0202 17:39:14.335017   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/custom-weave-20220202171134-76172/client.crt: no such file or directory
E0202 17:39:14.415529   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/custom-weave-20220202171134-76172/client.crt: no such file or directory
E0202 17:39:14.576429   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/custom-weave-20220202171134-76172/client.crt: no such file or directory
E0202 17:39:14.902166   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/custom-weave-20220202171134-76172/client.crt: no such file or directory
E0202 17:39:15.543846   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/custom-weave-20220202171134-76172/client.crt: no such file or directory
E0202 17:39:16.824195   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/custom-weave-20220202171134-76172/client.crt: no such file or directory
E0202 17:39:19.385716   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/custom-weave-20220202171134-76172/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default
E0202 17:39:24.508003   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/custom-weave-20220202171134-76172/client.crt: no such file or directory
E0202 17:39:34.748966   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/custom-weave-20220202171134-76172/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.135204557s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0202 17:39:48.855552   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/auto-20220202171133-76172/client.crt: no such file or directory
E0202 17:39:53.344761   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202162514-76172/client.crt: no such file or directory
E0202 17:39:55.235448   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/custom-weave-20220202171134-76172/client.crt: no such file or directory
E0202 17:39:56.038458   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202170808-76172/client.crt: no such file or directory
E0202 17:40:12.228613   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/cilium-20220202171134-76172/client.crt: no such file or directory
E0202 17:40:16.642638   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/auto-20220202171133-76172/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.125095867s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0202 17:40:36.197187   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/custom-weave-20220202171134-76172/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.148946559s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/enable-default-cni/DNS (326.94s)

TestNetworkPlugins/group/kindnet/Start (310.02s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-20220202171134-76172 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker 

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kindnet-20220202171134-76172 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker : exit status 80 (5m10.000097736s)

-- stdout --
	* [kindnet-20220202171134-76172] minikube v1.25.1 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=13251
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node kindnet-20220202171134-76172 in cluster kindnet-20220202171134-76172
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.2 on Docker 20.10.12 ...
	  - kubelet.housekeeping-interval=5m
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0202 17:41:11.837803   94203 out.go:297] Setting OutFile to fd 1 ...
	I0202 17:41:11.837926   94203 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0202 17:41:11.837931   94203 out.go:310] Setting ErrFile to fd 2...
	I0202 17:41:11.837935   94203 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0202 17:41:11.838007   94203 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/bin
	I0202 17:41:11.838327   94203 out.go:304] Setting JSON to false
	I0202 17:41:11.867127   94203 start.go:112] hostinfo: {"hostname":"37309.local","uptime":33046,"bootTime":1643819425,"procs":363,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0202 17:41:11.867229   94203 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0202 17:41:11.893905   94203 out.go:176] * [kindnet-20220202171134-76172] minikube v1.25.1 on Darwin 11.2.3
	I0202 17:41:11.894092   94203 notify.go:174] Checking for updates...
	I0202 17:41:11.941929   94203 out.go:176]   - MINIKUBE_LOCATION=13251
	I0202 17:41:11.967780   94203 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
	I0202 17:41:11.993580   94203 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0202 17:41:12.019712   94203 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0202 17:41:12.045637   94203 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube
	I0202 17:41:12.046069   94203 config.go:176] Loaded profile config "enable-default-cni-20220202171133-76172": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0202 17:41:12.046122   94203 driver.go:344] Setting default libvirt URI to qemu:///system
	I0202 17:41:12.147572   94203 docker.go:132] docker version: linux-20.10.6
	I0202 17:41:12.147712   94203 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0202 17:41:12.337237   94203 info.go:263] docker info: {ID:LVNT:MQD4:UDW3:UJT2:HLHX:4UTC:4NTE:52G5:6DGB:YSKS:CFIX:B23W Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:53 SystemTime:2022-02-03 01:41:12.262635953 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0202 17:41:12.385896   94203 out.go:176] * Using the docker driver based on user configuration
	I0202 17:41:12.386014   94203 start.go:281] selected driver: docker
	I0202 17:41:12.386042   94203 start.go:798] validating driver "docker" against <nil>
	I0202 17:41:12.386072   94203 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0202 17:41:12.389988   94203 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0202 17:41:12.578606   94203 info.go:263] docker info: {ID:LVNT:MQD4:UDW3:UJT2:HLHX:4UTC:4NTE:52G5:6DGB:YSKS:CFIX:B23W Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:53 SystemTime:2022-02-03 01:41:12.505186892 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0202 17:41:12.578715   94203 start_flags.go:288] no existing cluster config was found, will generate one from the flags 
	I0202 17:41:12.578827   94203 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	I0202 17:41:12.578845   94203 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0202 17:41:12.578862   94203 cni.go:93] Creating CNI manager for "kindnet"
	I0202 17:41:12.578870   94203 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0202 17:41:12.578875   94203 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0202 17:41:12.578878   94203 start_flags.go:297] Found "CNI" CNI - setting NetworkPlugin=cni
	I0202 17:41:12.578893   94203 start_flags.go:302] config:
	{Name:kindnet-20220202171134-76172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:kindnet-20220202171134-76172 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0202 17:41:12.604781   94203 out.go:176] * Starting control plane node kindnet-20220202171134-76172 in cluster kindnet-20220202171134-76172
	I0202 17:41:12.604838   94203 cache.go:120] Beginning downloading kic base image for docker with docker
	I0202 17:41:12.668332   94203 out.go:176] * Pulling base image ...
	I0202 17:41:12.668404   94203 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0202 17:41:12.668478   94203 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I0202 17:41:12.668485   94203 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4
	I0202 17:41:12.668508   94203 cache.go:57] Caching tarball of preloaded images
	I0202 17:41:12.668717   94203 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0202 17:41:12.668741   94203 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.2 on docker
	I0202 17:41:12.669896   94203 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kindnet-20220202171134-76172/config.json ...
	I0202 17:41:12.670072   94203 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kindnet-20220202171134-76172/config.json: {Name:mk10e7df357af338b6ff6742607c5724509d60d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 17:41:12.801849   94203 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I0202 17:41:12.801877   94203 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I0202 17:41:12.801890   94203 cache.go:208] Successfully downloaded all kic artifacts
	I0202 17:41:12.801935   94203 start.go:313] acquiring machines lock for kindnet-20220202171134-76172: {Name:mk605f7a965b1c6215d283ce5d79bb7e7f7c4b8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0202 17:41:12.803095   94203 start.go:317] acquired machines lock for "kindnet-20220202171134-76172" in 1.147208ms
	I0202 17:41:12.803129   94203 start.go:89] Provisioning new machine with config: &{Name:kindnet-20220202171134-76172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:kindnet-20220202171134-76172 Namespace:default APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0202 17:41:12.803221   94203 start.go:126] createHost starting for "" (driver="docker")
	I0202 17:41:12.850246   94203 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0202 17:41:12.850539   94203 start.go:160] libmachine.API.Create for "kindnet-20220202171134-76172" (driver="docker")
	I0202 17:41:12.850580   94203 client.go:168] LocalClient.Create starting
	I0202 17:41:12.850743   94203 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem
	I0202 17:41:12.850822   94203 main.go:130] libmachine: Decoding PEM data...
	I0202 17:41:12.850853   94203 main.go:130] libmachine: Parsing certificate...
	I0202 17:41:12.850951   94203 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/cert.pem
	I0202 17:41:12.851002   94203 main.go:130] libmachine: Decoding PEM data...
	I0202 17:41:12.851031   94203 main.go:130] libmachine: Parsing certificate...
	I0202 17:41:12.851991   94203 cli_runner.go:133] Run: docker network inspect kindnet-20220202171134-76172 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0202 17:41:12.970435   94203 cli_runner.go:180] docker network inspect kindnet-20220202171134-76172 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0202 17:41:12.970575   94203 network_create.go:254] running [docker network inspect kindnet-20220202171134-76172] to gather additional debugging logs...
	I0202 17:41:12.970599   94203 cli_runner.go:133] Run: docker network inspect kindnet-20220202171134-76172
	W0202 17:41:13.091633   94203 cli_runner.go:180] docker network inspect kindnet-20220202171134-76172 returned with exit code 1
	I0202 17:41:13.091657   94203 network_create.go:257] error running [docker network inspect kindnet-20220202171134-76172]: docker network inspect kindnet-20220202171134-76172: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-20220202171134-76172
	I0202 17:41:13.091677   94203 network_create.go:259] output of [docker network inspect kindnet-20220202171134-76172]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-20220202171134-76172
	
	** /stderr **
	I0202 17:41:13.091776   94203 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0202 17:41:13.210496   94203 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0006522e0] misses:0}
	I0202 17:41:13.210536   94203 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0202 17:41:13.210557   94203 network_create.go:106] attempt to create docker network kindnet-20220202171134-76172 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0202 17:41:13.210633   94203 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220202171134-76172
	W0202 17:41:13.327117   94203 cli_runner.go:180] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220202171134-76172 returned with exit code 1
	W0202 17:41:13.327158   94203 network_create.go:98] failed to create docker network kindnet-20220202171134-76172 192.168.49.0/24, will retry: subnet is taken
	I0202 17:41:13.327371   94203 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006522e0] amended:false}} dirty:map[] misses:0}
	I0202 17:41:13.327389   94203 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0202 17:41:13.327586   94203 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006522e0] amended:true}} dirty:map[192.168.49.0:0xc0006522e0 192.168.58.0:0xc0000100c0] misses:0}
	I0202 17:41:13.327603   94203 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0202 17:41:13.327609   94203 network_create.go:106] attempt to create docker network kindnet-20220202171134-76172 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0202 17:41:13.327686   94203 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220202171134-76172
	I0202 17:41:19.572506   94203 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220202171134-76172: (6.244620497s)
	I0202 17:41:19.572527   94203 network_create.go:90] docker network kindnet-20220202171134-76172 192.168.58.0/24 created
	I0202 17:41:19.572542   94203 kic.go:106] calculated static IP "192.168.58.2" for the "kindnet-20220202171134-76172" container
	I0202 17:41:19.572658   94203 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0202 17:41:19.692200   94203 cli_runner.go:133] Run: docker volume create kindnet-20220202171134-76172 --label name.minikube.sigs.k8s.io=kindnet-20220202171134-76172 --label created_by.minikube.sigs.k8s.io=true
	I0202 17:41:19.812281   94203 oci.go:102] Successfully created a docker volume kindnet-20220202171134-76172
	I0202 17:41:19.812405   94203 cli_runner.go:133] Run: docker run --rm --name kindnet-20220202171134-76172-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220202171134-76172 --entrypoint /usr/bin/test -v kindnet-20220202171134-76172:/var gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -d /var/lib
	I0202 17:41:20.358220   94203 oci.go:106] Successfully prepared a docker volume kindnet-20220202171134-76172
	I0202 17:41:20.358256   94203 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0202 17:41:20.358268   94203 kic.go:179] Starting extracting preloaded images to volume ...
	I0202 17:41:20.358377   94203 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20220202171134-76172:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir
	I0202 17:41:26.741639   94203 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20220202171134-76172:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir: (6.383052045s)
	I0202 17:41:26.741675   94203 kic.go:188] duration metric: took 6.383245 seconds to extract preloaded images to volume
	I0202 17:41:26.741836   94203 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0202 17:41:27.052478   94203 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-20220202171134-76172 --name kindnet-20220202171134-76172 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220202171134-76172 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-20220202171134-76172 --network kindnet-20220202171134-76172 --ip 192.168.58.2 --volume kindnet-20220202171134-76172:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b
	I0202 17:41:37.866829   94203 cli_runner.go:186] Completed: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-20220202171134-76172 --name kindnet-20220202171134-76172 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220202171134-76172 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-20220202171134-76172 --network kindnet-20220202171134-76172 --ip 192.168.58.2 --volume kindnet-20220202171134-76172:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b: (10.814002325s)
	I0202 17:41:37.866953   94203 cli_runner.go:133] Run: docker container inspect kindnet-20220202171134-76172 --format={{.State.Running}}
	I0202 17:41:37.995537   94203 cli_runner.go:133] Run: docker container inspect kindnet-20220202171134-76172 --format={{.State.Status}}
	I0202 17:41:38.119211   94203 cli_runner.go:133] Run: docker exec kindnet-20220202171134-76172 stat /var/lib/dpkg/alternatives/iptables
	I0202 17:41:38.304450   94203 oci.go:281] the created container "kindnet-20220202171134-76172" has a running status.
	I0202 17:41:38.304485   94203 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/kindnet-20220202171134-76172/id_rsa...
	I0202 17:41:38.436953   94203 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/kindnet-20220202171134-76172/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0202 17:41:38.626997   94203 cli_runner.go:133] Run: docker container inspect kindnet-20220202171134-76172 --format={{.State.Status}}
	I0202 17:41:38.755403   94203 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0202 17:41:38.755420   94203 kic_runner.go:114] Args: [docker exec --privileged kindnet-20220202171134-76172 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0202 17:41:38.996297   94203 cli_runner.go:133] Run: docker container inspect kindnet-20220202171134-76172 --format={{.State.Status}}
	I0202 17:41:39.125521   94203 machine.go:88] provisioning docker machine ...
	I0202 17:41:39.125562   94203 ubuntu.go:169] provisioning hostname "kindnet-20220202171134-76172"
	I0202 17:41:39.125687   94203 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220202171134-76172
	I0202 17:41:39.256386   94203 main.go:130] libmachine: Using SSH client type: native
	I0202 17:41:39.256603   94203 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 55367 <nil> <nil>}
	I0202 17:41:39.256622   94203 main.go:130] libmachine: About to run SSH command:
	sudo hostname kindnet-20220202171134-76172 && echo "kindnet-20220202171134-76172" | sudo tee /etc/hostname
	I0202 17:41:39.408220   94203 main.go:130] libmachine: SSH cmd err, output: <nil>: kindnet-20220202171134-76172
	
	I0202 17:41:39.408329   94203 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220202171134-76172
	I0202 17:41:39.546801   94203 main.go:130] libmachine: Using SSH client type: native
	I0202 17:41:39.546959   94203 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 55367 <nil> <nil>}
	I0202 17:41:39.546975   94203 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-20220202171134-76172' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-20220202171134-76172/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-20220202171134-76172' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0202 17:41:39.702876   94203 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0202 17:41:39.702895   94203 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/key.pem ServerCertRemotePath:/etc/doc
ker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube}
	I0202 17:41:39.702919   94203 ubuntu.go:177] setting up certificates
	I0202 17:41:39.702928   94203 provision.go:83] configureAuth start
	I0202 17:41:39.703007   94203 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220202171134-76172
	I0202 17:41:39.825425   94203 provision.go:138] copyHostCerts
	I0202 17:41:39.825541   94203 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.pem, removing ...
	I0202 17:41:39.825550   94203 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.pem
	I0202 17:41:39.825652   94203 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.pem (1078 bytes)
	I0202 17:41:39.825848   94203 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cert.pem, removing ...
	I0202 17:41:39.825859   94203 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cert.pem
	I0202 17:41:39.825918   94203 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cert.pem (1123 bytes)
	I0202 17:41:39.826058   94203 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/key.pem, removing ...
	I0202 17:41:39.826070   94203 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/key.pem
	I0202 17:41:39.826131   94203 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/key.pem (1679 bytes)
	I0202 17:41:39.826251   94203 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca-key.pem org=jenkins.kindnet-20220202171134-76172 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube kindnet-20220202171134-76172]
	I0202 17:41:39.882745   94203 provision.go:172] copyRemoteCerts
	I0202 17:41:39.882829   94203 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0202 17:41:39.882900   94203 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220202171134-76172
	I0202 17:41:40.007530   94203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55367 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/kindnet-20220202171134-76172/id_rsa Username:docker}
	I0202 17:41:40.103850   94203 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0202 17:41:40.124331   94203 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0202 17:41:40.144885   94203 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0202 17:41:40.165957   94203 provision.go:86] duration metric: configureAuth took 463.008465ms
	I0202 17:41:40.165971   94203 ubuntu.go:193] setting minikube options for container-runtime
	I0202 17:41:40.166111   94203 config.go:176] Loaded profile config "kindnet-20220202171134-76172": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0202 17:41:40.166194   94203 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220202171134-76172
	I0202 17:41:40.368046   94203 main.go:130] libmachine: Using SSH client type: native
	I0202 17:41:40.368207   94203 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 55367 <nil> <nil>}
	I0202 17:41:40.368224   94203 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0202 17:41:40.511526   94203 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0202 17:41:40.511541   94203 ubuntu.go:71] root file system type: overlay
	I0202 17:41:40.511693   94203 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0202 17:41:40.511769   94203 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220202171134-76172
	I0202 17:41:40.638289   94203 main.go:130] libmachine: Using SSH client type: native
	I0202 17:41:40.638435   94203 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 55367 <nil> <nil>}
	I0202 17:41:40.638488   94203 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0202 17:41:40.785005   94203 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0202 17:41:40.785111   94203 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220202171134-76172
	I0202 17:41:40.906261   94203 main.go:130] libmachine: Using SSH client type: native
	I0202 17:41:40.906426   94203 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 55367 <nil> <nil>}
	I0202 17:41:40.906444   94203 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0202 17:41:52.668081   94203 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-12-13 11:43:42.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-02-03 01:41:40.783699004 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0202 17:41:52.668102   94203 machine.go:91] provisioned docker machine in 13.542225363s
	I0202 17:41:52.668110   94203 client.go:171] LocalClient.Create took 39.81653784s
	I0202 17:41:52.668127   94203 start.go:168] duration metric: libmachine.API.Create for "kindnet-20220202171134-76172" took 39.816607523s
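The `diff ... || { mv ...; systemctl ...; }` command logged above is an update-if-changed pattern: the new unit only replaces the installed one (triggering daemon-reload and a docker restart in the real flow) when the rendered content actually differs. A minimal standalone sketch, using temp files as stand-ins for /lib/systemd/system/docker.service:

```shell
# Sketch of the idempotent unit-file update shown in the log above.
# All paths are illustrative temp files, not the real systemd unit dir.
set -eu
unitdir="$(mktemp -d)"
printf '[Unit]\nDescription=old unit\n' > "$unitdir/docker.service"
printf '[Unit]\nDescription=new unit\n' > "$unitdir/docker.service.new"

# `diff -u old new` exits non-zero when the files differ; the `||` branch
# then promotes the .new file, mirroring the SSH command in the log
# (which additionally runs daemon-reload / enable / restart).
diff -u "$unitdir/docker.service" "$unitdir/docker.service.new" >/dev/null || {
  mv "$unitdir/docker.service.new" "$unitdir/docker.service"
  echo "unit replaced"
}
```

When the two files are identical, `diff` exits 0 and the replacement (and the disruptive docker restart) is skipped entirely.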
	I0202 17:41:52.668147   94203 start.go:267] post-start starting for "kindnet-20220202171134-76172" (driver="docker")
	I0202 17:41:52.668157   94203 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0202 17:41:52.668823   94203 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0202 17:41:52.668899   94203 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220202171134-76172
	I0202 17:41:52.800081   94203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55367 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/kindnet-20220202171134-76172/id_rsa Username:docker}
	I0202 17:41:52.897323   94203 ssh_runner.go:195] Run: cat /etc/os-release
	I0202 17:41:52.901562   94203 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0202 17:41:52.901578   94203 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0202 17:41:52.901584   94203 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0202 17:41:52.901590   94203 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0202 17:41:52.901600   94203 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/addons for local assets ...
	I0202 17:41:52.901692   94203 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files for local assets ...
	I0202 17:41:52.902199   94203 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/761722.pem -> 761722.pem in /etc/ssl/certs
	I0202 17:41:52.902375   94203 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0202 17:41:52.911193   94203 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/761722.pem --> /etc/ssl/certs/761722.pem (1708 bytes)
	I0202 17:41:52.930678   94203 start.go:270] post-start completed in 262.510149ms
	I0202 17:41:52.931565   94203 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220202171134-76172
	I0202 17:41:53.054037   94203 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kindnet-20220202171134-76172/config.json ...
	I0202 17:41:53.054540   94203 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0202 17:41:53.054606   94203 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220202171134-76172
	I0202 17:41:53.177063   94203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55367 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/kindnet-20220202171134-76172/id_rsa Username:docker}
	I0202 17:41:53.272119   94203 start.go:129] duration metric: createHost completed in 40.467891932s
	I0202 17:41:53.272139   94203 start.go:80] releasing machines lock for "kindnet-20220202171134-76172", held for 40.468036287s
	I0202 17:41:53.272261   94203 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220202171134-76172
	I0202 17:41:53.404446   94203 ssh_runner.go:195] Run: systemctl --version
	I0202 17:41:53.404536   94203 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220202171134-76172
	I0202 17:41:53.405199   94203 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0202 17:41:53.405421   94203 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220202171134-76172
	I0202 17:41:53.538562   94203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55367 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/kindnet-20220202171134-76172/id_rsa Username:docker}
	I0202 17:41:53.539048   94203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55367 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/kindnet-20220202171134-76172/id_rsa Username:docker}
	I0202 17:41:54.102013   94203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0202 17:41:54.112601   94203 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0202 17:41:54.123094   94203 cruntime.go:272] skipping containerd shutdown because we are bound to it
	I0202 17:41:54.123169   94203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0202 17:41:54.133757   94203 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0202 17:41:54.148322   94203 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0202 17:41:54.210828   94203 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0202 17:41:54.270555   94203 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0202 17:41:54.281976   94203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0202 17:41:54.349089   94203 ssh_runner.go:195] Run: sudo systemctl start docker
	I0202 17:41:54.360891   94203 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0202 17:41:54.401596   94203 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0202 17:41:54.493288   94203 out.go:203] * Preparing Kubernetes v1.23.2 on Docker 20.10.12 ...
	I0202 17:41:54.493440   94203 cli_runner.go:133] Run: docker exec -t kindnet-20220202171134-76172 dig +short host.docker.internal
	I0202 17:41:54.685150   94203 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0202 17:41:54.686029   94203 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0202 17:41:54.690774   94203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0202 17:41:54.703953   94203 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kindnet-20220202171134-76172
	I0202 17:41:54.857994   94203 out.go:176]   - kubelet.housekeeping-interval=5m
	I0202 17:41:54.883703   94203 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0202 17:41:54.883814   94203 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0202 17:41:54.884000   94203 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0202 17:41:54.916068   94203 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.2
	k8s.gcr.io/kube-scheduler:v1.23.2
	k8s.gcr.io/kube-controller-manager:v1.23.2
	k8s.gcr.io/kube-proxy:v1.23.2
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0202 17:41:54.916080   94203 docker.go:537] Images already preloaded, skipping extraction
	I0202 17:41:54.916172   94203 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0202 17:41:54.946661   94203 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.2
	k8s.gcr.io/kube-scheduler:v1.23.2
	k8s.gcr.io/kube-controller-manager:v1.23.2
	k8s.gcr.io/kube-proxy:v1.23.2
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
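The "Images already preloaded, skipping extraction" decision above amounts to checking that every expected image appears in the `docker images --format {{.Repository}}:{{.Tag}}` output. A rough sketch of that comparison, with both lists hard-coded as illustrative stand-ins for the real preload manifest and daemon output:

```shell
# Sketch of the preload check: extraction is skipped only when every
# expected image tag is already present in the daemon's image list.
set -eu
expected='k8s.gcr.io/kube-apiserver:v1.23.2
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/pause:3.6'

# Stand-in for `docker images --format {{.Repository}}:{{.Tag}}` output.
loaded='k8s.gcr.io/kube-apiserver:v1.23.2
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5'

missing=0
while IFS= read -r img; do
  # -x: whole-line match, -F: literal string (tags contain dots).
  printf '%s\n' "$loaded" | grep -qxF "$img" || missing=1
done <<EOF
$expected
EOF
if [ "$missing" -eq 0 ]; then
  echo "images preloaded, extraction skipped"
fi
```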
	I0202 17:41:54.946686   94203 cache_images.go:84] Images are preloaded, skipping loading
	I0202 17:41:54.946790   94203 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0202 17:41:55.024937   94203 cni.go:93] Creating CNI manager for "kindnet"
	I0202 17:41:55.024969   94203 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0202 17:41:55.024984   94203 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-20220202171134-76172 NodeName:kindnet-20220202171134-76172 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0202 17:41:55.025090   94203 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kindnet-20220202171134-76172"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0202 17:41:55.025189   94203 kubeadm.go:931] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kindnet-20220202171134-76172 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.2 ClusterName:kindnet-20220202171134-76172 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:}
	I0202 17:41:55.025257   94203 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.2
	I0202 17:41:55.033507   94203 binaries.go:44] Found k8s binaries, skipping transfer
	I0202 17:41:55.033577   94203 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0202 17:41:55.041724   94203 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0202 17:41:55.054690   94203 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0202 17:41:55.067937   94203 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2050 bytes)
	I0202 17:41:55.083338   94203 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0202 17:41:55.087994   94203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
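The `{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp ...` command logged above is an idempotent hosts-file update: any stale line for the name is stripped before the fresh mapping is appended, so re-running it never duplicates entries. A sketch against a temp file standing in for /etc/hosts (the old IP 192.168.58.9 is illustrative):

```shell
# Sketch of the idempotent /etc/hosts entry update from the log above.
set -eu
hosts="$(mktemp)"
printf '127.0.0.1\tlocalhost\n192.168.58.9\tcontrol-plane.minikube.internal\n' > "$hosts"

# grep -v drops any existing line for the name (stale IP); the brace
# group writes the filtered file plus the current mapping, which then
# replaces the original (the log uses `sudo cp` for the final step).
{ grep -v 'control-plane.minikube.internal$' "$hosts"
  printf '192.168.58.2\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
```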
	I0202 17:41:55.101317   94203 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kindnet-20220202171134-76172 for IP: 192.168.58.2
	I0202 17:41:55.101480   94203 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.key
	I0202 17:41:55.101550   94203 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/proxy-client-ca.key
	I0202 17:41:55.101609   94203 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kindnet-20220202171134-76172/client.key
	I0202 17:41:55.101628   94203 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kindnet-20220202171134-76172/client.crt with IP's: []
	I0202 17:41:55.213126   94203 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kindnet-20220202171134-76172/client.crt ...
	I0202 17:41:55.213142   94203 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kindnet-20220202171134-76172/client.crt: {Name:mkb2c287817ea87909abce222355a87404327747 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 17:41:55.214463   94203 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kindnet-20220202171134-76172/client.key ...
	I0202 17:41:55.214472   94203 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kindnet-20220202171134-76172/client.key: {Name:mk63e6992a0a4816ef5a6312d41828663258869c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 17:41:55.214676   94203 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kindnet-20220202171134-76172/apiserver.key.cee25041
	I0202 17:41:55.214694   94203 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kindnet-20220202171134-76172/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0202 17:41:55.324862   94203 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kindnet-20220202171134-76172/apiserver.crt.cee25041 ...
	I0202 17:41:55.324878   94203 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kindnet-20220202171134-76172/apiserver.crt.cee25041: {Name:mk40fdd6b8d1c25ba35ab4ec89b5f08d9ede6c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 17:41:55.325990   94203 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kindnet-20220202171134-76172/apiserver.key.cee25041 ...
	I0202 17:41:55.325999   94203 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kindnet-20220202171134-76172/apiserver.key.cee25041: {Name:mkfa5538ae2cd9f7ae3ea54f98c6eac8792a6531 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 17:41:55.326519   94203 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kindnet-20220202171134-76172/apiserver.crt.cee25041 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kindnet-20220202171134-76172/apiserver.crt
	I0202 17:41:55.326671   94203 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kindnet-20220202171134-76172/apiserver.key.cee25041 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kindnet-20220202171134-76172/apiserver.key
	I0202 17:41:55.326819   94203 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kindnet-20220202171134-76172/proxy-client.key
	I0202 17:41:55.326835   94203 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kindnet-20220202171134-76172/proxy-client.crt with IP's: []
	I0202 17:41:55.396729   94203 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kindnet-20220202171134-76172/proxy-client.crt ...
	I0202 17:41:55.396743   94203 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kindnet-20220202171134-76172/proxy-client.crt: {Name:mk587c716d06544675c3594d4e8363c72aefb7d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 17:41:55.398013   94203 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kindnet-20220202171134-76172/proxy-client.key ...
	I0202 17:41:55.398023   94203 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kindnet-20220202171134-76172/proxy-client.key: {Name:mk87bcc6f7fbd69dd631c181ec12648989b7b433 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 17:41:55.398931   94203 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/76172.pem (1338 bytes)
	W0202 17:41:55.399003   94203 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/76172_empty.pem, impossibly tiny 0 bytes
	I0202 17:41:55.399016   94203 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca-key.pem (1679 bytes)
	I0202 17:41:55.399052   94203 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem (1078 bytes)
	I0202 17:41:55.399095   94203 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/cert.pem (1123 bytes)
	I0202 17:41:55.399135   94203 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/key.pem (1679 bytes)
	I0202 17:41:55.399207   94203 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/761722.pem (1708 bytes)
	I0202 17:41:55.399956   94203 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kindnet-20220202171134-76172/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0202 17:41:55.417748   94203 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kindnet-20220202171134-76172/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0202 17:41:55.434627   94203 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kindnet-20220202171134-76172/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0202 17:41:55.451920   94203 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kindnet-20220202171134-76172/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0202 17:41:55.469357   94203 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0202 17:41:55.486283   94203 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0202 17:41:55.505388   94203 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0202 17:41:55.522417   94203 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0202 17:41:55.539397   94203 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0202 17:41:55.556110   94203 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/76172.pem --> /usr/share/ca-certificates/76172.pem (1338 bytes)
	I0202 17:41:55.574545   94203 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/761722.pem --> /usr/share/ca-certificates/761722.pem (1708 bytes)
	I0202 17:41:55.593691   94203 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0202 17:41:55.608751   94203 ssh_runner.go:195] Run: openssl version
	I0202 17:41:55.614546   94203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/76172.pem && ln -fs /usr/share/ca-certificates/76172.pem /etc/ssl/certs/76172.pem"
	I0202 17:41:55.622902   94203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/76172.pem
	I0202 17:41:55.626929   94203 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb  3 00:25 /usr/share/ca-certificates/76172.pem
	I0202 17:41:55.626972   94203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/76172.pem
	I0202 17:41:55.632641   94203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/76172.pem /etc/ssl/certs/51391683.0"
	I0202 17:41:55.641081   94203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/761722.pem && ln -fs /usr/share/ca-certificates/761722.pem /etc/ssl/certs/761722.pem"
	I0202 17:41:55.648770   94203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/761722.pem
	I0202 17:41:55.652987   94203 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb  3 00:25 /usr/share/ca-certificates/761722.pem
	I0202 17:41:55.653038   94203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/761722.pem
	I0202 17:41:55.658988   94203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/761722.pem /etc/ssl/certs/3ec20f2e.0"
	I0202 17:41:55.667416   94203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0202 17:41:55.675593   94203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0202 17:41:55.679672   94203 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb  3 00:14 /usr/share/ca-certificates/minikubeCA.pem
	I0202 17:41:55.679722   94203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0202 17:41:55.685552   94203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0202 17:41:55.693688   94203 kubeadm.go:390] StartCluster: {Name:kindnet-20220202171134-76172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:kindnet-20220202171134-76172 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0202 17:41:55.693804   94203 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0202 17:41:55.723097   94203 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0202 17:41:55.731417   94203 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0202 17:41:55.738756   94203 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0202 17:41:55.738805   94203 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0202 17:41:55.746381   94203 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0202 17:41:55.746403   94203 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0202 17:41:56.266526   94203 out.go:203]   - Generating certificates and keys ...
	I0202 17:41:58.663505   94203 out.go:203]   - Booting up control plane ...
	I0202 17:42:07.198560   94203 out.go:203]   - Configuring RBAC rules ...
	I0202 17:42:07.585088   94203 cni.go:93] Creating CNI manager for "kindnet"
	I0202 17:42:07.610996   94203 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0202 17:42:07.611188   94203 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0202 17:42:07.616859   94203 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.2/kubectl ...
	I0202 17:42:07.616871   94203 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0202 17:42:07.636996   94203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0202 17:42:08.226361   94203 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0202 17:42:08.226437   94203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:42:08.226451   94203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=e7ecaa98a6d1dab5935ea4b7778c6e187f5bde82 minikube.k8s.io/name=kindnet-20220202171134-76172 minikube.k8s.io/updated_at=2022_02_02T17_42_08_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:42:08.307064   94203 ops.go:34] apiserver oom_adj: -16
	I0202 17:42:08.307142   94203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:42:08.860337   94203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:42:09.364780   94203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:42:09.865830   94203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:42:10.361300   94203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:42:10.860007   94203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:42:11.360072   94203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:42:11.859700   94203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:42:12.364995   94203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:42:12.859682   94203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:42:13.359776   94203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:42:13.864272   94203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:42:14.362881   94203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:42:14.861626   94203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:42:15.361648   94203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:42:15.865713   94203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:42:16.363817   94203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:42:16.860511   94203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:42:17.368530   94203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:42:17.859826   94203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:42:18.360348   94203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:42:18.860156   94203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:42:19.360450   94203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:42:19.860477   94203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:42:20.359900   94203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:42:20.859862   94203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:42:20.916121   94203 kubeadm.go:1007] duration metric: took 12.689438301s to wait for elevateKubeSystemPrivileges.
	I0202 17:42:20.916139   94203 kubeadm.go:392] StartCluster complete in 25.22183291s
	I0202 17:42:20.916155   94203 settings.go:142] acquiring lock: {Name:mkea0cd61827c3e8cfbcf6e420c5dbfe453193c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 17:42:20.916245   94203 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
	I0202 17:42:20.916860   94203 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig: {Name:mk472bf8b440ca08b271324870e056290a1de0e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 17:42:21.442503   94203 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kindnet-20220202171134-76172" rescaled to 1
	I0202 17:42:21.442534   94203 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0202 17:42:21.442546   94203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0202 17:42:21.442560   94203 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0202 17:42:21.486337   94203 out.go:176] * Verifying Kubernetes components...
	I0202 17:42:21.442710   94203 config.go:176] Loaded profile config "kindnet-20220202171134-76172": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0202 17:42:21.486414   94203 addons.go:65] Setting default-storageclass=true in profile "kindnet-20220202171134-76172"
	I0202 17:42:21.486414   94203 addons.go:65] Setting storage-provisioner=true in profile "kindnet-20220202171134-76172"
	I0202 17:42:21.486420   94203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0202 17:42:21.486431   94203 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-20220202171134-76172"
	I0202 17:42:21.486434   94203 addons.go:153] Setting addon storage-provisioner=true in "kindnet-20220202171134-76172"
	W0202 17:42:21.486440   94203 addons.go:165] addon storage-provisioner should already be in state true
	I0202 17:42:21.486461   94203 host.go:66] Checking if "kindnet-20220202171134-76172" exists ...
	I0202 17:42:21.486746   94203 cli_runner.go:133] Run: docker container inspect kindnet-20220202171134-76172 --format={{.State.Status}}
	I0202 17:42:21.486876   94203 cli_runner.go:133] Run: docker container inspect kindnet-20220202171134-76172 --format={{.State.Status}}
	I0202 17:42:21.496494   94203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0202 17:42:21.506474   94203 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kindnet-20220202171134-76172
	I0202 17:42:21.649282   94203 addons.go:153] Setting addon default-storageclass=true in "kindnet-20220202171134-76172"
	I0202 17:42:21.672077   94203 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0202 17:42:21.672115   94203 addons.go:165] addon default-storageclass should already be in state true
	I0202 17:42:21.672157   94203 host.go:66] Checking if "kindnet-20220202171134-76172" exists ...
	I0202 17:42:21.672303   94203 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0202 17:42:21.672322   94203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0202 17:42:21.672471   94203 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220202171134-76172
	I0202 17:42:21.673443   94203 cli_runner.go:133] Run: docker container inspect kindnet-20220202171134-76172 --format={{.State.Status}}
	I0202 17:42:21.685041   94203 node_ready.go:35] waiting up to 5m0s for node "kindnet-20220202171134-76172" to be "Ready" ...
	I0202 17:42:21.725438   94203 start.go:777] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0202 17:42:21.835053   94203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55367 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/kindnet-20220202171134-76172/id_rsa Username:docker}
	I0202 17:42:21.835120   94203 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0202 17:42:21.835130   94203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0202 17:42:21.835206   94203 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220202171134-76172
	I0202 17:42:21.940648   94203 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0202 17:42:21.960078   94203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55367 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/kindnet-20220202171134-76172/id_rsa Username:docker}
	I0202 17:42:22.099054   94203 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0202 17:42:22.366245   94203 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0202 17:42:22.366260   94203 addons.go:417] enableAddons completed in 923.68525ms
	I0202 17:42:23.696367   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:42:25.696546   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:42:28.202005   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:42:30.202384   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:42:32.696319   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:42:34.703299   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:42:37.196201   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:42:39.196598   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:42:41.203307   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:42:43.696663   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:42:45.697232   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:42:47.697442   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:42:50.201542   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:42:52.696823   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:42:55.198014   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:42:57.198991   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:42:59.199379   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:43:01.199901   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:43:03.699237   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:43:06.197986   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:43:08.198864   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:43:10.201975   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:43:12.696704   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:43:14.703860   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:43:17.200147   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:43:19.697810   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:43:21.698235   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:43:24.199207   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:43:26.200357   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:43:28.201213   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:43:30.698473   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:43:32.702460   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:43:35.198702   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:43:37.199667   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:43:39.202160   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:43:41.203943   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:43:43.208693   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:43:45.699941   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:43:48.202482   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:43:50.701467   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:43:53.201059   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:43:55.703621   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:43:58.198616   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:44:00.203394   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:44:02.699306   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:44:05.204491   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:44:07.206014   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:44:09.703138   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:44:12.201866   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:44:14.700833   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:44:17.200543   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:44:19.203367   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:44:21.204772   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:44:23.700265   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:44:26.202011   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:44:28.202338   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:44:30.701374   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:44:33.200886   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:44:35.700081   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:44:37.710351   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:44:40.201337   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:44:42.699764   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:44:44.702384   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:44:47.206091   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:44:49.209637   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:44:51.700079   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:44:53.702653   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:44:55.708559   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:44:58.199720   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:45:00.700849   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:45:03.204624   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:45:05.705457   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:45:08.201648   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:45:10.703135   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:45:13.199961   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:45:15.208804   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:45:17.211423   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:45:19.709408   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:45:22.202170   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:45:24.700638   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:45:26.703074   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:45:29.208502   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:45:31.209762   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:45:33.702734   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:45:36.207242   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:45:38.701858   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:45:40.706513   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:45:43.201513   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:45:45.202349   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:45:47.207154   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:45:49.702675   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:45:52.209382   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:45:54.702803   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:45:56.706285   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:45:59.208960   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:46:01.707007   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:46:04.203162   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:46:06.207724   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:46:08.703197   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:46:10.711964   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:46:13.203356   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:46:15.204585   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:46:17.703793   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:46:19.704706   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:46:21.706094   94203 node_ready.go:58] node "kindnet-20220202171134-76172" has status "Ready":"False"
	I0202 17:46:21.706105   94203 node_ready.go:38] duration metric: took 4m0.014937224s waiting for node "kindnet-20220202171134-76172" to be "Ready" ...
	I0202 17:46:21.733047   94203 out.go:176] 
	W0202 17:46:21.733139   94203 out.go:241] X Exiting due to GUEST_START: wait 5m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0202 17:46:21.733147   94203 out.go:241] * 
	W0202 17:46:21.733719   94203 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0202 17:46:21.781592   94203 out.go:176] 

                                                
                                                
** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (310.02s)
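The kindnet failure above is a readiness timeout: the start wrapper polled the node's Ready condition for 4m0s and it never left "False". As a rough sketch (function names here are illustrative, not minikube's actual API), that poll boils down to reading the STATUS column of `kubectl get nodes --no-headers` and retrying until a deadline:

```shell
# Hypothetical helper mirroring the node_ready poll seen in the log above.
# Input: one line of `kubectl get nodes --no-headers` output
# (NAME STATUS ROLES AGE VERSION); prints "true" iff STATUS is Ready.
node_ready() {
  status=$(printf '%s\n' "$1" | awk '{print $2}')
  if [ "$status" = "Ready" ]; then echo "true"; else echo "false"; fi
}

# A wait loop like the one that timed out here would retry until a deadline:
wait_node_ready() {
  name=$1
  deadline=$(( $(date +%s) + ${2:-300} ))
  while [ "$(date +%s)" -lt "$deadline" ]; do
    line=$(kubectl get nodes "$name" --no-headers 2>/dev/null)
    [ "$(node_ready "$line")" = "true" ] && return 0
    sleep 2
  done
  return 1   # corresponds to the GUEST_START "timed out waiting for the condition" exit
}
```

For a live repro, `kubectl --context kindnet-20220202171134-76172 describe node` and `minikube logs --file=logs.txt` (as the advice box above suggests) are the natural next diagnostic steps.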

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (367.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.133968342s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default
E0202 17:43:30.226691   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202162514-76172/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.151203611s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.122878817s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default
E0202 17:44:05.821413   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202163007-76172/client.crt: no such file or directory
E0202 17:44:14.264998   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/custom-weave-20220202171134-76172/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.149199275s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.127644242s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0202 17:44:41.980133   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/custom-weave-20220202171134-76172/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default
E0202 17:44:48.864534   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/auto-20220202171133-76172/client.crt: no such file or directory
E0202 17:44:56.043425   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202170808-76172/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.129674814s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.150144427s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default
E0202 17:45:39.045388   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/enable-default-cni-20220202171133-76172/client.crt: no such file or directory
E0202 17:45:39.053057   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/enable-default-cni-20220202171133-76172/client.crt: no such file or directory
E0202 17:45:39.067716   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/enable-default-cni-20220202171133-76172/client.crt: no such file or directory
E0202 17:45:39.092941   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/enable-default-cni-20220202171133-76172/client.crt: no such file or directory
E0202 17:45:39.135979   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/enable-default-cni-20220202171133-76172/client.crt: no such file or directory
E0202 17:45:39.216641   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/enable-default-cni-20220202171133-76172/client.crt: no such file or directory
E0202 17:45:39.378617   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/enable-default-cni-20220202171133-76172/client.crt: no such file or directory
E0202 17:45:39.699294   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/enable-default-cni-20220202171133-76172/client.crt: no such file or directory
E0202 17:45:40.341449   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/enable-default-cni-20220202171133-76172/client.crt: no such file or directory
E0202 17:45:41.624318   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/enable-default-cni-20220202171133-76172/client.crt: no such file or directory
E0202 17:45:44.190115   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/enable-default-cni-20220202171133-76172/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.151297853s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0202 17:45:49.317205   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/enable-default-cni-20220202171133-76172/client.crt: no such file or directory
E0202 17:45:59.558364   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/enable-default-cni-20220202171133-76172/client.crt: no such file or directory
E0202 17:46:00.598506   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/false-20220202171134-76172/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.156996878s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0202 17:46:20.042256   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/enable-default-cni-20220202171133-76172/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default
E0202 17:46:42.003863   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.146868858s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0202 17:47:01.011854   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/enable-default-cni-20220202171133-76172/client.crt: no such file or directory
E0202 17:47:28.389716   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/cilium-20220202171134-76172/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default
E0202 17:47:42.715549   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202163007-76172/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.135040678s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0202 17:47:59.165725   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202170808-76172/client.crt: no such file or directory
E0202 17:48:22.934899   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/enable-default-cni-20220202171133-76172/client.crt: no such file or directory
E0202 17:48:30.233226   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202162514-76172/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default
E0202 17:49:14.272065   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/custom-weave-20220202171134-76172/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.159498378s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/bridge/DNS (367.67s)
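net_test.go:174 states what the DNS assertion wants: the nslookup output must contain the kubernetes.default service IP 10.96.0.1, and every attempt instead returned a resolver timeout. A minimal stand-in for that check (the function name is made up; the kubectl command is the one from the log):

```shell
# Hypothetical re-statement of the net_test.go:174 assertion:
# pass iff the nslookup output mentions the cluster service IP.
dns_check() {
  case "$1" in
    *"10.96.0.1"*) echo "pass" ;;
    *)             echo "fail" ;;
  esac
}

# The probe itself, as run by the test:
#   kubectl --context bridge-20220202171133-76172 \
#     exec deployment/netcat -- nslookup kubernetes.default
```

The ";; connection timed out; no servers could be reached" output means the query never got an answer from any resolver, which suggests pod-to-CoreDNS connectivity under the bridge plugin rather than a wrong DNS record.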

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (7200.592s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-20220202171133-76172 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubenet-20220202171133-76172 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker : signal: killed (4m50.725907017s)

                                                
                                                
-- stdout --
	* [kubenet-20220202171133-76172] minikube v1.25.1 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=13251
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node kubenet-20220202171133-76172 in cluster kubenet-20220202171133-76172
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.2 on Docker 20.10.12 ...
	  - kubelet.housekeeping-interval=5m
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass

                                                
                                                
-- /stdout --
** stderr ** 
	I0202 17:46:43.114848   95060 out.go:297] Setting OutFile to fd 1 ...
	I0202 17:46:43.114980   95060 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0202 17:46:43.114986   95060 out.go:310] Setting ErrFile to fd 2...
	I0202 17:46:43.114989   95060 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0202 17:46:43.115065   95060 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/bin
	I0202 17:46:43.115379   95060 out.go:304] Setting JSON to false
	I0202 17:46:43.141066   95060 start.go:112] hostinfo: {"hostname":"37309.local","uptime":33378,"bootTime":1643819425,"procs":366,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0202 17:46:43.141152   95060 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0202 17:46:43.167182   95060 out.go:176] * [kubenet-20220202171133-76172] minikube v1.25.1 on Darwin 11.2.3
	I0202 17:46:43.167380   95060 notify.go:174] Checking for updates...
	I0202 17:46:43.214794   95060 out.go:176]   - MINIKUBE_LOCATION=13251
	I0202 17:46:43.240681   95060 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
	I0202 17:46:43.266698   95060 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0202 17:46:43.292478   95060 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0202 17:46:43.316602   95060 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube
	I0202 17:46:43.317071   95060 config.go:176] Loaded profile config "bridge-20220202171133-76172": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0202 17:46:43.317120   95060 driver.go:344] Setting default libvirt URI to qemu:///system
	I0202 17:46:43.417785   95060 docker.go:132] docker version: linux-20.10.6
	I0202 17:46:43.417906   95060 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0202 17:46:43.610900   95060 info.go:263] docker info: {ID:LVNT:MQD4:UDW3:UJT2:HLHX:4UTC:4NTE:52G5:6DGB:YSKS:CFIX:B23W Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:53 SystemTime:2022-02-03 01:46:43.538186929 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0202 17:46:43.658330   95060 out.go:176] * Using the docker driver based on user configuration
	I0202 17:46:43.658396   95060 start.go:281] selected driver: docker
	I0202 17:46:43.658408   95060 start.go:798] validating driver "docker" against <nil>
	I0202 17:46:43.658428   95060 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0202 17:46:43.661897   95060 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0202 17:46:43.850739   95060 info.go:263] docker info: {ID:LVNT:MQD4:UDW3:UJT2:HLHX:4UTC:4NTE:52G5:6DGB:YSKS:CFIX:B23W Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:53 SystemTime:2022-02-03 01:46:43.778938917 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0202 17:46:43.850864   95060 start_flags.go:288] no existing cluster config was found, will generate one from the flags 
	I0202 17:46:43.850990   95060 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	I0202 17:46:43.851007   95060 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0202 17:46:43.851024   95060 cni.go:89] network plugin configured as "kubenet", returning disabled
	I0202 17:46:43.851031   95060 start_flags.go:302] config:
	{Name:kubenet-20220202171133-76172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:kubenet-20220202171133-76172 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0202 17:46:43.878025   95060 out.go:176] * Starting control plane node kubenet-20220202171133-76172 in cluster kubenet-20220202171133-76172
	I0202 17:46:43.878168   95060 cache.go:120] Beginning downloading kic base image for docker with docker
	I0202 17:46:43.904655   95060 out.go:176] * Pulling base image ...
	I0202 17:46:43.904711   95060 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0202 17:46:43.904806   95060 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I0202 17:46:43.904864   95060 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4
	I0202 17:46:43.904914   95060 cache.go:57] Caching tarball of preloaded images
	I0202 17:46:43.906074   95060 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0202 17:46:43.906170   95060 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.2 on docker
	I0202 17:46:43.906616   95060 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kubenet-20220202171133-76172/config.json ...
	I0202 17:46:43.906853   95060 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kubenet-20220202171133-76172/config.json: {Name:mk504ba8cf12472eaac12a6d94e8a96f0c839583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 17:46:44.056123   95060 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I0202 17:46:44.056156   95060 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I0202 17:46:44.056189   95060 cache.go:208] Successfully downloaded all kic artifacts
	I0202 17:46:44.056231   95060 start.go:313] acquiring machines lock for kubenet-20220202171133-76172: {Name:mkdd37e4caefb87b4d26080bf19a1e80d4801793 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0202 17:46:44.056788   95060 start.go:317] acquired machines lock for "kubenet-20220202171133-76172" in 544.226µs
	I0202 17:46:44.056819   95060 start.go:89] Provisioning new machine with config: &{Name:kubenet-20220202171133-76172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:kubenet-20220202171133-76172 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0202 17:46:44.056881   95060 start.go:126] createHost starting for "" (driver="docker")
	I0202 17:46:44.082623   95060 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0202 17:46:44.082812   95060 start.go:160] libmachine.API.Create for "kubenet-20220202171133-76172" (driver="docker")
	I0202 17:46:44.082837   95060 client.go:168] LocalClient.Create starting
	I0202 17:46:44.082914   95060 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem
	I0202 17:46:44.082954   95060 main.go:130] libmachine: Decoding PEM data...
	I0202 17:46:44.082969   95060 main.go:130] libmachine: Parsing certificate...
	I0202 17:46:44.083031   95060 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/cert.pem
	I0202 17:46:44.083058   95060 main.go:130] libmachine: Decoding PEM data...
	I0202 17:46:44.083066   95060 main.go:130] libmachine: Parsing certificate...
	I0202 17:46:44.103798   95060 cli_runner.go:133] Run: docker network inspect kubenet-20220202171133-76172 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0202 17:46:44.224732   95060 cli_runner.go:180] docker network inspect kubenet-20220202171133-76172 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0202 17:46:44.224843   95060 network_create.go:254] running [docker network inspect kubenet-20220202171133-76172] to gather additional debugging logs...
	I0202 17:46:44.224859   95060 cli_runner.go:133] Run: docker network inspect kubenet-20220202171133-76172
	W0202 17:46:44.347526   95060 cli_runner.go:180] docker network inspect kubenet-20220202171133-76172 returned with exit code 1
	I0202 17:46:44.347553   95060 network_create.go:257] error running [docker network inspect kubenet-20220202171133-76172]: docker network inspect kubenet-20220202171133-76172: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubenet-20220202171133-76172
	I0202 17:46:44.347577   95060 network_create.go:259] output of [docker network inspect kubenet-20220202171133-76172]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubenet-20220202171133-76172
	
	** /stderr **
	I0202 17:46:44.347676   95060 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0202 17:46:44.464898   95060 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000614468] misses:0}
	I0202 17:46:44.464943   95060 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0202 17:46:44.464968   95060 network_create.go:106] attempt to create docker network kubenet-20220202171133-76172 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0202 17:46:44.465064   95060 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220202171133-76172
	W0202 17:46:44.584396   95060 cli_runner.go:180] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220202171133-76172 returned with exit code 1
	W0202 17:46:44.584451   95060 network_create.go:98] failed to create docker network kubenet-20220202171133-76172 192.168.49.0/24, will retry: subnet is taken
	I0202 17:46:44.584692   95060 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000614468] amended:false}} dirty:map[] misses:0}
	I0202 17:46:44.584712   95060 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0202 17:46:44.584906   95060 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000614468] amended:true}} dirty:map[192.168.49.0:0xc000614468 192.168.58.0:0xc0006aa140] misses:0}
	I0202 17:46:44.584926   95060 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0202 17:46:44.584932   95060 network_create.go:106] attempt to create docker network kubenet-20220202171133-76172 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0202 17:46:44.585008   95060 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220202171133-76172
	I0202 17:46:50.437136   95060 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220202171133-76172: (5.851943855s)
	I0202 17:46:50.437158   95060 network_create.go:90] docker network kubenet-20220202171133-76172 192.168.58.0/24 created
	I0202 17:46:50.437172   95060 kic.go:106] calculated static IP "192.168.58.2" for the "kubenet-20220202171133-76172" container
	I0202 17:46:50.437273   95060 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0202 17:46:50.556275   95060 cli_runner.go:133] Run: docker volume create kubenet-20220202171133-76172 --label name.minikube.sigs.k8s.io=kubenet-20220202171133-76172 --label created_by.minikube.sigs.k8s.io=true
	I0202 17:46:50.673927   95060 oci.go:102] Successfully created a docker volume kubenet-20220202171133-76172
	I0202 17:46:50.674050   95060 cli_runner.go:133] Run: docker run --rm --name kubenet-20220202171133-76172-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-20220202171133-76172 --entrypoint /usr/bin/test -v kubenet-20220202171133-76172:/var gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -d /var/lib
	I0202 17:46:51.179966   95060 oci.go:106] Successfully prepared a docker volume kubenet-20220202171133-76172
	I0202 17:46:51.180031   95060 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0202 17:46:51.180043   95060 kic.go:179] Starting extracting preloaded images to volume ...
	I0202 17:46:51.180201   95060 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-20220202171133-76172:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir
	I0202 17:46:56.371940   95060 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-20220202171133-76172:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir: (5.19152398s)
	I0202 17:46:56.371963   95060 kic.go:188] duration metric: took 5.191785 seconds to extract preloaded images to volume
	I0202 17:46:56.372088   95060 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0202 17:46:56.571935   95060 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-20220202171133-76172 --name kubenet-20220202171133-76172 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-20220202171133-76172 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-20220202171133-76172 --network kubenet-20220202171133-76172 --ip 192.168.58.2 --volume kubenet-20220202171133-76172:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b
	I0202 17:47:07.777594   95060 cli_runner.go:186] Completed: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-20220202171133-76172 --name kubenet-20220202171133-76172 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-20220202171133-76172 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-20220202171133-76172 --network kubenet-20220202171133-76172 --ip 192.168.58.2 --volume kubenet-20220202171133-76172:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b: (11.205306606s)
	I0202 17:47:07.777720   95060 cli_runner.go:133] Run: docker container inspect kubenet-20220202171133-76172 --format={{.State.Running}}
	I0202 17:47:07.910290   95060 cli_runner.go:133] Run: docker container inspect kubenet-20220202171133-76172 --format={{.State.Status}}
	I0202 17:47:08.034600   95060 cli_runner.go:133] Run: docker exec kubenet-20220202171133-76172 stat /var/lib/dpkg/alternatives/iptables
	I0202 17:47:08.213468   95060 oci.go:281] the created container "kubenet-20220202171133-76172" has a running status.
	I0202 17:47:08.213498   95060 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/kubenet-20220202171133-76172/id_rsa...
	I0202 17:47:08.378732   95060 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/kubenet-20220202171133-76172/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0202 17:47:08.569055   95060 cli_runner.go:133] Run: docker container inspect kubenet-20220202171133-76172 --format={{.State.Status}}
	I0202 17:47:08.694563   95060 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0202 17:47:08.694584   95060 kic_runner.go:114] Args: [docker exec --privileged kubenet-20220202171133-76172 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0202 17:47:08.876898   95060 cli_runner.go:133] Run: docker container inspect kubenet-20220202171133-76172 --format={{.State.Status}}
	I0202 17:47:09.042010   95060 machine.go:88] provisioning docker machine ...
	I0202 17:47:09.042059   95060 ubuntu.go:169] provisioning hostname "kubenet-20220202171133-76172"
	I0202 17:47:09.042159   95060 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220202171133-76172
	I0202 17:47:09.165354   95060 main.go:130] libmachine: Using SSH client type: native
	I0202 17:47:09.165567   95060 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 56480 <nil> <nil>}
	I0202 17:47:09.165581   95060 main.go:130] libmachine: About to run SSH command:
	sudo hostname kubenet-20220202171133-76172 && echo "kubenet-20220202171133-76172" | sudo tee /etc/hostname
	I0202 17:47:09.167001   95060 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0202 17:47:12.320955   95060 main.go:130] libmachine: SSH cmd err, output: <nil>: kubenet-20220202171133-76172
	
	I0202 17:47:12.321754   95060 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220202171133-76172
	I0202 17:47:12.441949   95060 main.go:130] libmachine: Using SSH client type: native
	I0202 17:47:12.442119   95060 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 56480 <nil> <nil>}
	I0202 17:47:12.442134   95060 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubenet-20220202171133-76172' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubenet-20220202171133-76172/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubenet-20220202171133-76172' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0202 17:47:12.578771   95060 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0202 17:47:12.578803   95060 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube}
	I0202 17:47:12.578836   95060 ubuntu.go:177] setting up certificates
	I0202 17:47:12.578849   95060 provision.go:83] configureAuth start
	I0202 17:47:12.578951   95060 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-20220202171133-76172
	I0202 17:47:12.701527   95060 provision.go:138] copyHostCerts
	I0202 17:47:12.701623   95060 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.pem, removing ...
	I0202 17:47:12.701631   95060 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.pem
	I0202 17:47:12.702629   95060 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.pem (1078 bytes)
	I0202 17:47:12.702812   95060 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cert.pem, removing ...
	I0202 17:47:12.702827   95060 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cert.pem
	I0202 17:47:12.702888   95060 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cert.pem (1123 bytes)
	I0202 17:47:12.703037   95060 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/key.pem, removing ...
	I0202 17:47:12.703043   95060 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/key.pem
	I0202 17:47:12.703103   95060 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/key.pem (1679 bytes)
	I0202 17:47:12.703213   95060 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca-key.pem org=jenkins.kubenet-20220202171133-76172 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube kubenet-20220202171133-76172]
	I0202 17:47:12.885517   95060 provision.go:172] copyRemoteCerts
	I0202 17:47:12.885977   95060 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0202 17:47:12.886081   95060 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220202171133-76172
	I0202 17:47:13.007827   95060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56480 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/kubenet-20220202171133-76172/id_rsa Username:docker}
	I0202 17:47:13.102069   95060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0202 17:47:13.129661   95060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0202 17:47:13.146520   95060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0202 17:47:13.164036   95060 provision.go:86] duration metric: configureAuth took 585.159154ms
	I0202 17:47:13.164049   95060 ubuntu.go:193] setting minikube options for container-runtime
	I0202 17:47:13.164214   95060 config.go:176] Loaded profile config "kubenet-20220202171133-76172": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0202 17:47:13.164287   95060 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220202171133-76172
	I0202 17:47:13.283964   95060 main.go:130] libmachine: Using SSH client type: native
	I0202 17:47:13.284180   95060 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 56480 <nil> <nil>}
	I0202 17:47:13.284191   95060 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0202 17:47:13.422681   95060 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0202 17:47:13.422697   95060 ubuntu.go:71] root file system type: overlay
	I0202 17:47:13.422874   95060 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0202 17:47:13.422976   95060 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220202171133-76172
	I0202 17:47:13.543127   95060 main.go:130] libmachine: Using SSH client type: native
	I0202 17:47:13.543293   95060 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 56480 <nil> <nil>}
	I0202 17:47:13.543342   95060 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0202 17:47:13.687146   95060 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0202 17:47:13.687246   95060 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220202171133-76172
	I0202 17:47:13.806648   95060 main.go:130] libmachine: Using SSH client type: native
	I0202 17:47:13.806806   95060 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 56480 <nil> <nil>}
	I0202 17:47:13.806819   95060 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0202 17:47:41.987889   95060 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-12-13 11:43:42.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-02-03 01:47:13.686933015 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
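The `diff || { mv … && restart; }` command above only has to swap the unit file because systemd lets a unit clear an inherited `ExecStart` by assigning it empty before setting the real command. A minimal sketch of the rule the rewritten [Service] section relies on, using a throwaway file in /tmp rather than the live docker.service:

```shell
# Hypothetical demo file, not the real unit: an empty ExecStart= resets any
# previously declared command, so exactly one blank ExecStart precedes the
# real one in the merged configuration.
cat > /tmp/demo-docker.service <<'EOF'
[Service]
Type=notify
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
# Without the blank assignment, systemd would refuse to start the service:
#   "Service has more than one ExecStart= setting, which is only allowed
#    for Type=oneshot services."
grep -c '^ExecStart=$' /tmp/demo-docker.service
```

On a real node, `systemctl cat docker.service` (run later in this log) prints the merged unit so the same check can be done against the live configuration.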
	
	I0202 17:47:41.987918   95060 machine.go:91] provisioned docker machine in 32.9450643s
	I0202 17:47:41.987925   95060 client.go:171] LocalClient.Create took 57.903631155s
	I0202 17:47:41.987943   95060 start.go:168] duration metric: libmachine.API.Create for "kubenet-20220202171133-76172" took 57.903677668s
	I0202 17:47:41.987955   95060 start.go:267] post-start starting for "kubenet-20220202171133-76172" (driver="docker")
	I0202 17:47:41.987959   95060 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0202 17:47:41.988045   95060 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0202 17:47:41.988119   95060 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220202171133-76172
	I0202 17:47:42.111059   95060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56480 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/kubenet-20220202171133-76172/id_rsa Username:docker}
	I0202 17:47:42.207814   95060 ssh_runner.go:195] Run: cat /etc/os-release
	I0202 17:47:42.211896   95060 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0202 17:47:42.211919   95060 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0202 17:47:42.211927   95060 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0202 17:47:42.211945   95060 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0202 17:47:42.211962   95060 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/addons for local assets ...
	I0202 17:47:42.212057   95060 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files for local assets ...
	I0202 17:47:42.212650   95060 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/761722.pem -> 761722.pem in /etc/ssl/certs
	I0202 17:47:42.212834   95060 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0202 17:47:42.220521   95060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/761722.pem --> /etc/ssl/certs/761722.pem (1708 bytes)
	I0202 17:47:42.237234   95060 start.go:270] post-start completed in 249.265125ms
	I0202 17:47:42.237737   95060 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-20220202171133-76172
	I0202 17:47:42.357094   95060 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kubenet-20220202171133-76172/config.json ...
	I0202 17:47:42.357507   95060 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0202 17:47:42.357569   95060 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220202171133-76172
	I0202 17:47:42.478464   95060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56480 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/kubenet-20220202171133-76172/id_rsa Username:docker}
	I0202 17:47:42.570126   95060 start.go:129] duration metric: createHost completed in 58.511768722s
	I0202 17:47:42.570149   95060 start.go:80] releasing machines lock for "kubenet-20220202171133-76172", held for 58.511882939s
	I0202 17:47:42.570261   95060 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-20220202171133-76172
	I0202 17:47:42.693764   95060 ssh_runner.go:195] Run: systemctl --version
	I0202 17:47:42.693845   95060 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220202171133-76172
	I0202 17:47:42.694511   95060 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0202 17:47:42.694708   95060 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220202171133-76172
	I0202 17:47:42.824571   95060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56480 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/kubenet-20220202171133-76172/id_rsa Username:docker}
	I0202 17:47:42.824655   95060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56480 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/kubenet-20220202171133-76172/id_rsa Username:docker}
	I0202 17:47:43.386752   95060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0202 17:47:43.397521   95060 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0202 17:47:43.406992   95060 cruntime.go:272] skipping containerd shutdown because we are bound to it
	I0202 17:47:43.407073   95060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0202 17:47:43.416110   95060 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
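The crictl.yaml written above is how crictl learns which CRI socket to talk to. A minimal sketch of the same printf-and-tee idiom, pointed at a scratch directory in /tmp so it needs no sudo:

```shell
# Write the two dockershim endpoints to a scratch copy of crictl.yaml and
# confirm both lines landed; /etc/crictl.yaml on the node gets the same body.
mkdir -p /tmp/demo-etc
printf %s 'runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
' | tee /tmp/demo-etc/crictl.yaml >/dev/null
grep -c 'unix:///var/run/dockershim.sock' /tmp/demo-etc/crictl.yaml
```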
	I0202 17:47:43.428707   95060 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0202 17:47:43.487716   95060 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0202 17:47:43.546499   95060 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0202 17:47:43.557098   95060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0202 17:47:43.611887   95060 ssh_runner.go:195] Run: sudo systemctl start docker
	I0202 17:47:43.621511   95060 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0202 17:47:43.659576   95060 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0202 17:47:43.744053   95060 out.go:203] * Preparing Kubernetes v1.23.2 on Docker 20.10.12 ...
	I0202 17:47:43.744221   95060 cli_runner.go:133] Run: docker exec -t kubenet-20220202171133-76172 dig +short host.docker.internal
	I0202 17:47:43.926356   95060 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0202 17:47:43.927715   95060 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0202 17:47:43.932299   95060 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
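The /etc/hosts rewrite above is idempotent: it filters out any stale host.minikube.internal line before appending the fresh one, so repeated runs never accumulate duplicates. A sketch of the same idiom against a temp copy (no sudo needed; the seed entries are invented):

```shell
# Seed a fake hosts file that already carries an outdated entry.
hosts=/tmp/demo-hosts
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"
# Same idiom as the logged command: drop the old entry, append the new one,
# then replace the file via a temp copy keyed on the shell PID.
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  echo $'192.168.65.2\thost.minikube.internal'; } > "/tmp/h.$$"
cp "/tmp/h.$$" "$hosts"
grep -c 'host.minikube.internal' "$hosts"   # exactly one entry remains
```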
	I0202 17:47:43.941939   95060 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-20220202171133-76172
	I0202 17:47:44.117129   95060 out.go:176]   - kubelet.housekeeping-interval=5m
	I0202 17:47:44.117227   95060 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0202 17:47:44.117338   95060 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0202 17:47:44.147984   95060 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.2
	k8s.gcr.io/kube-controller-manager:v1.23.2
	k8s.gcr.io/kube-scheduler:v1.23.2
	k8s.gcr.io/kube-proxy:v1.23.2
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0202 17:47:44.147996   95060 docker.go:537] Images already preloaded, skipping extraction
	I0202 17:47:44.148082   95060 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0202 17:47:44.178896   95060 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.2
	k8s.gcr.io/kube-scheduler:v1.23.2
	k8s.gcr.io/kube-controller-manager:v1.23.2
	k8s.gcr.io/kube-proxy:v1.23.2
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0202 17:47:44.178911   95060 cache_images.go:84] Images are preloaded, skipping loading
	I0202 17:47:44.179015   95060 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0202 17:47:44.260896   95060 cni.go:89] network plugin configured as "kubenet", returning disabled
	I0202 17:47:44.260916   95060 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0202 17:47:44.260932   95060 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubenet-20220202171133-76172 NodeName:kubenet-20220202171133-76172 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0202 17:47:44.261028   95060 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubenet-20220202171133-76172"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
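The kubeadm config above is a single file carrying four YAML documents separated by `---`: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A minimal sketch of that shape, trimmed to the apiVersion/kind headers only (all field values omitted):

```shell
# Skeleton of the multi-document config minikube ships to the node as
# /var/tmp/minikube/kubeadm.yaml.new; only the document headers are kept.
cat > /tmp/demo-kubeadm.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
grep -c '^kind:' /tmp/demo-kubeadm.yaml
```

On a node with the binaries installed, `kubeadm init --config <file> --dry-run` can exercise such a config without touching the cluster.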
	
	I0202 17:47:44.261101   95060 kubeadm.go:931] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubenet-20220202171133-76172 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=kubenet --node-ip=192.168.58.2 --pod-cidr=10.244.0.0/16
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.2 ClusterName:kubenet-20220202171133-76172 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0202 17:47:44.261166   95060 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.2
	I0202 17:47:44.268399   95060 binaries.go:44] Found k8s binaries, skipping transfer
	I0202 17:47:44.268453   95060 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0202 17:47:44.275515   95060 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (431 bytes)
	I0202 17:47:44.289132   95060 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0202 17:47:44.304076   95060 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2050 bytes)
	I0202 17:47:44.320815   95060 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0202 17:47:44.325134   95060 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0202 17:47:44.335659   95060 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kubenet-20220202171133-76172 for IP: 192.168.58.2
	I0202 17:47:44.335793   95060 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.key
	I0202 17:47:44.335858   95060 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/proxy-client-ca.key
	I0202 17:47:44.335906   95060 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kubenet-20220202171133-76172/client.key
	I0202 17:47:44.335923   95060 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kubenet-20220202171133-76172/client.crt with IP's: []
	I0202 17:47:44.396987   95060 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kubenet-20220202171133-76172/client.crt ...
	I0202 17:47:44.397004   95060 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kubenet-20220202171133-76172/client.crt: {Name:mk7e8d3e864d06e0e820025428bd6e6ff1165720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 17:47:44.398804   95060 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kubenet-20220202171133-76172/client.key ...
	I0202 17:47:44.398814   95060 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kubenet-20220202171133-76172/client.key: {Name:mk5046faba6bfafdf995e5a37d74be38dd6ad99d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 17:47:44.399391   95060 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kubenet-20220202171133-76172/apiserver.key.cee25041
	I0202 17:47:44.399409   95060 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kubenet-20220202171133-76172/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0202 17:47:44.458135   95060 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kubenet-20220202171133-76172/apiserver.crt.cee25041 ...
	I0202 17:47:44.458149   95060 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kubenet-20220202171133-76172/apiserver.crt.cee25041: {Name:mk2e820ad67855788563b8a313a65d811f950a6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 17:47:44.459467   95060 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kubenet-20220202171133-76172/apiserver.key.cee25041 ...
	I0202 17:47:44.459490   95060 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kubenet-20220202171133-76172/apiserver.key.cee25041: {Name:mk9e8a6b765f1b9a73757fa2ea4fa2d3f9443176 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 17:47:44.460464   95060 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kubenet-20220202171133-76172/apiserver.crt.cee25041 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kubenet-20220202171133-76172/apiserver.crt
	I0202 17:47:44.460671   95060 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kubenet-20220202171133-76172/apiserver.key.cee25041 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kubenet-20220202171133-76172/apiserver.key
	I0202 17:47:44.460856   95060 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kubenet-20220202171133-76172/proxy-client.key
	I0202 17:47:44.460874   95060 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kubenet-20220202171133-76172/proxy-client.crt with IP's: []
	I0202 17:47:44.545208   95060 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kubenet-20220202171133-76172/proxy-client.crt ...
	I0202 17:47:44.545224   95060 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kubenet-20220202171133-76172/proxy-client.crt: {Name:mk92b70afed05d16131dc040be35d4f3b84d4856 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 17:47:44.546752   95060 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kubenet-20220202171133-76172/proxy-client.key ...
	I0202 17:47:44.546773   95060 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kubenet-20220202171133-76172/proxy-client.key: {Name:mkef9d829f30ccf67e387bc3a9dc17a8dd5980f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 17:47:44.547670   95060 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/76172.pem (1338 bytes)
	W0202 17:47:44.547721   95060 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/76172_empty.pem, impossibly tiny 0 bytes
	I0202 17:47:44.547736   95060 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca-key.pem (1679 bytes)
	I0202 17:47:44.547771   95060 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem (1078 bytes)
	I0202 17:47:44.547806   95060 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/cert.pem (1123 bytes)
	I0202 17:47:44.547837   95060 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/key.pem (1679 bytes)
	I0202 17:47:44.547915   95060 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/761722.pem (1708 bytes)
	I0202 17:47:44.548680   95060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kubenet-20220202171133-76172/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0202 17:47:44.566684   95060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kubenet-20220202171133-76172/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0202 17:47:44.584488   95060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kubenet-20220202171133-76172/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0202 17:47:44.602015   95060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/kubenet-20220202171133-76172/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0202 17:47:44.618786   95060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0202 17:47:44.635650   95060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0202 17:47:44.652299   95060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0202 17:47:44.669185   95060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0202 17:47:44.685803   95060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0202 17:47:44.702933   95060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/76172.pem --> /usr/share/ca-certificates/76172.pem (1338 bytes)
	I0202 17:47:44.719748   95060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/761722.pem --> /usr/share/ca-certificates/761722.pem (1708 bytes)
	I0202 17:47:44.736495   95060 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0202 17:47:44.750528   95060 ssh_runner.go:195] Run: openssl version
	I0202 17:47:44.756285   95060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0202 17:47:44.764478   95060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0202 17:47:44.769477   95060 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb  3 00:14 /usr/share/ca-certificates/minikubeCA.pem
	I0202 17:47:44.769530   95060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0202 17:47:44.774882   95060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0202 17:47:44.782587   95060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/76172.pem && ln -fs /usr/share/ca-certificates/76172.pem /etc/ssl/certs/76172.pem"
	I0202 17:47:44.791156   95060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/76172.pem
	I0202 17:47:44.795228   95060 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb  3 00:25 /usr/share/ca-certificates/76172.pem
	I0202 17:47:44.795281   95060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/76172.pem
	I0202 17:47:44.801075   95060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/76172.pem /etc/ssl/certs/51391683.0"
	I0202 17:47:44.809090   95060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/761722.pem && ln -fs /usr/share/ca-certificates/761722.pem /etc/ssl/certs/761722.pem"
	I0202 17:47:44.817718   95060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/761722.pem
	I0202 17:47:44.821591   95060 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb  3 00:25 /usr/share/ca-certificates/761722.pem
	I0202 17:47:44.821638   95060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/761722.pem
	I0202 17:47:44.827374   95060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/761722.pem /etc/ssl/certs/3ec20f2e.0"
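The sequence above installs each CA certificate and then symlinks it under its OpenSSL subject-hash name (e.g. `b5213941.0`), which is how OpenSSL locates trusted CAs in a certs directory. A minimal, self-contained sketch of that technique, using a throwaway self-signed CA and a temp directory instead of `/etc/ssl/certs` (the `exampleCA` subject is purely illustrative):

```shell
set -eu
workdir="$(mktemp -d)"
# Generate a throwaway self-signed CA for illustration only.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=exampleCA" \
  -keyout "$workdir/ca.key" -out "$workdir/ca.pem" 2>/dev/null
# OpenSSL looks up CAs by <subject-hash>.0 inside the certs directory,
# so compute the hash and link the cert under that name.
hash="$(openssl x509 -hash -noout -in "$workdir/ca.pem")"
ln -fs "$workdir/ca.pem" "$workdir/$hash.0"
ls -l "$workdir/$hash.0"
```

This is the same `test -L … || ln -fs …` pattern minikube runs over SSH, just pointed at a scratch directory.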
	I0202 17:47:44.835211   95060 kubeadm.go:390] StartCluster: {Name:kubenet-20220202171133-76172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:kubenet-20220202171133-76172 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0202 17:47:44.835336   95060 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0202 17:47:44.864795   95060 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0202 17:47:44.872934   95060 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0202 17:47:44.880244   95060 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0202 17:47:44.880304   95060 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0202 17:47:44.887755   95060 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0202 17:47:44.887772   95060 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0202 17:47:45.406314   95060 out.go:203]   - Generating certificates and keys ...
	I0202 17:47:49.516915   95060 out.go:203]   - Booting up control plane ...
	I0202 17:48:04.547211   95060 out.go:203]   - Configuring RBAC rules ...
	I0202 17:48:04.931021   95060 cni.go:89] network plugin configured as "kubenet", returning disabled
	I0202 17:48:04.931053   95060 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0202 17:48:04.931148   95060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=e7ecaa98a6d1dab5935ea4b7778c6e187f5bde82 minikube.k8s.io/name=kubenet-20220202171133-76172 minikube.k8s.io/updated_at=2022_02_02T17_48_04_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:48:04.931152   95060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:48:04.946235   95060 ops.go:34] apiserver oom_adj: -16
	I0202 17:48:04.980815   95060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:48:05.597182   95060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:48:06.098574   95060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:48:06.597032   95060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:48:07.097057   95060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:48:07.602768   95060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:48:08.098954   95060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:48:08.597502   95060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:48:09.097511   95060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:48:09.601107   95060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:48:10.097182   95060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:48:10.598344   95060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:48:11.098632   95060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:48:11.598085   95060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:48:12.103173   95060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:48:12.597934   95060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:48:13.099279   95060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:48:13.600601   95060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:48:14.100872   95060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:48:14.597403   95060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:48:15.098204   95060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:48:15.597357   95060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:48:16.097691   95060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:48:16.597451   95060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:48:17.098369   95060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:48:17.597581   95060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:48:18.097400   95060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 17:48:18.164888   95060 kubeadm.go:1007] duration metric: took 13.233494079s to wait for elevateKubeSystemPrivileges.
	I0202 17:48:18.164908   95060 kubeadm.go:392] StartCluster complete in 33.328877664s
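The run of identical `kubectl get sa default` calls above is a fixed-interval readiness poll: minikube retries roughly every 500ms until the `default` service account exists, then logs the total duration. A sketch of the same wait-with-deadline pattern, with a backgrounded `touch` standing in for "the API server created the service account" (the flag file and 10s deadline are illustrative):

```shell
set -eu
flag="$(mktemp -u)"               # unique path only; file does not exist yet
( sleep 1; touch "$flag" ) &      # stand-in for the condition becoming true
deadline=$(( $(date +%s) + 10 ))
until [ -e "$flag" ]; do
  if [ "$(date +%s)" -ge "$deadline" ]; then
    echo "timed out waiting for flag" >&2
    exit 1
  fi
  sleep 0.5                       # ~500ms spacing, matching the log timestamps
done
echo "ready after poll"
wait
```

Polling with a hard deadline (rather than retrying forever) is what lets the caller report `duration metric: took …` on success or fail cleanly on timeout.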
	I0202 17:48:18.164925   95060 settings.go:142] acquiring lock: {Name:mkea0cd61827c3e8cfbcf6e420c5dbfe453193c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 17:48:18.165016   95060 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
	I0202 17:48:18.166039   95060 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig: {Name:mk472bf8b440ca08b271324870e056290a1de0e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 17:48:18.691663   95060 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kubenet-20220202171133-76172" rescaled to 1
	I0202 17:48:18.691700   95060 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0202 17:48:18.691723   95060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0202 17:48:18.720929   95060 out.go:176] * Verifying Kubernetes components...
	I0202 17:48:18.691742   95060 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0202 17:48:18.691890   95060 config.go:176] Loaded profile config "kubenet-20220202171133-76172": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0202 17:48:18.721041   95060 addons.go:65] Setting storage-provisioner=true in profile "kubenet-20220202171133-76172"
	I0202 17:48:18.721067   95060 addons.go:153] Setting addon storage-provisioner=true in "kubenet-20220202171133-76172"
	I0202 17:48:18.721068   95060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0202 17:48:18.721067   95060 addons.go:65] Setting default-storageclass=true in profile "kubenet-20220202171133-76172"
	W0202 17:48:18.721075   95060 addons.go:165] addon storage-provisioner should already be in state true
	I0202 17:48:18.721103   95060 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubenet-20220202171133-76172"
	I0202 17:48:18.721124   95060 host.go:66] Checking if "kubenet-20220202171133-76172" exists ...
	I0202 17:48:18.721667   95060 cli_runner.go:133] Run: docker container inspect kubenet-20220202171133-76172 --format={{.State.Status}}
	I0202 17:48:18.721907   95060 cli_runner.go:133] Run: docker container inspect kubenet-20220202171133-76172 --format={{.State.Status}}
	I0202 17:48:18.740279   95060 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-20220202171133-76172
	I0202 17:48:18.805045   95060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0202 17:48:18.912242   95060 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0202 17:48:18.912482   95060 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0202 17:48:18.912503   95060 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0202 17:48:18.912655   95060 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220202171133-76172
	I0202 17:48:18.922866   95060 node_ready.go:35] waiting up to 5m0s for node "kubenet-20220202171133-76172" to be "Ready" ...
	I0202 17:48:18.943985   95060 addons.go:153] Setting addon default-storageclass=true in "kubenet-20220202171133-76172"
	W0202 17:48:18.943999   95060 addons.go:165] addon default-storageclass should already be in state true
	I0202 17:48:18.944017   95060 host.go:66] Checking if "kubenet-20220202171133-76172" exists ...
	I0202 17:48:18.944218   95060 node_ready.go:49] node "kubenet-20220202171133-76172" has status "Ready":"True"
	I0202 17:48:18.944226   95060 node_ready.go:38] duration metric: took 21.334181ms waiting for node "kubenet-20220202171133-76172" to be "Ready" ...
	I0202 17:48:18.944232   95060 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0202 17:48:18.975811   95060 cli_runner.go:133] Run: docker container inspect kubenet-20220202171133-76172 --format={{.State.Status}}
	I0202 17:48:19.058527   95060 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-9756g" in "kube-system" namespace to be "Ready" ...
	I0202 17:48:19.127861   95060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56480 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/kubenet-20220202171133-76172/id_rsa Username:docker}
	I0202 17:48:19.127936   95060 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0202 17:48:19.127946   95060 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0202 17:48:19.128032   95060 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220202171133-76172
	I0202 17:48:19.265746   95060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56480 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/kubenet-20220202171133-76172/id_rsa Username:docker}
	I0202 17:48:19.360796   95060 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0202 17:48:19.465270   95060 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0202 17:48:20.041305   95060 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.236192888s)
	I0202 17:48:20.041329   95060 start.go:777] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
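The long command completed above edits the CoreDNS ConfigMap as a pipeline: dump it with `kubectl get configmap coredns -o yaml`, use `sed`'s insert command (`i`) to add a `hosts` block immediately before the `forward . /etc/resolv.conf` line, then feed the result to `kubectl replace -f -`. The `sed` step in isolation, run against a sample Corefile (the sample contents are illustrative; GNU `sed` is assumed, since it processes the `\n` escapes in inserted text):

```shell
set -eu
corefile="$(mktemp)"
cat > "$corefile" <<'EOF'
.:53 {
        errors
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
}
EOF
# Insert a hosts block before "forward" so host.minikube.internal
# resolves to the host from inside the cluster.
sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' "$corefile"
```

`fallthrough` is what keeps every other name flowing on to the `forward` plugin; only `host.minikube.internal` is answered by the injected `hosts` block.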
	I0202 17:48:20.171527   95060 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0202 17:48:20.171546   95060 addons.go:417] enableAddons completed in 1.479779615s
	I0202 17:48:21.086115   95060 pod_ready.go:92] pod "coredns-64897985d-9756g" in "kube-system" namespace has status "Ready":"True"
	I0202 17:48:21.086128   95060 pod_ready.go:81] duration metric: took 2.02752181s waiting for pod "coredns-64897985d-9756g" in "kube-system" namespace to be "Ready" ...
	I0202 17:48:21.086138   95060 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-lchhn" in "kube-system" namespace to be "Ready" ...
	I0202 17:48:23.098870   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:48:25.599292   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:48:28.098159   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:48:30.604993   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:48:33.097219   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:48:35.597732   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:48:37.601764   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:48:40.101198   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:48:42.601990   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:48:45.097512   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:48:47.098023   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:48:49.102879   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:48:51.602190   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:48:53.608394   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:48:56.099062   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:48:58.603111   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:49:01.097989   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:49:03.118014   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:49:05.602706   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:49:07.605364   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:49:10.098414   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:49:12.103864   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:49:14.601074   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:49:16.602397   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:49:19.104601   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:49:21.606559   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:49:24.108074   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:49:26.600246   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:49:28.606766   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:49:31.105053   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:49:33.603521   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:49:35.605811   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:49:38.102410   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:49:40.600078   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:49:42.603782   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:49:44.604053   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:49:47.099445   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:49:49.101114   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:49:51.600079   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:49:53.601988   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:49:56.103077   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:49:58.599684   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:50:00.600005   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:50:03.111909   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:50:05.600838   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:50:07.607926   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:50:10.099756   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:50:12.100051   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:50:14.102244   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:50:16.599778   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:50:18.601325   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:50:20.604769   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:50:23.136185   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:50:25.603784   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:50:28.102020   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:50:30.600218   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:50:32.601546   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:50:34.604852   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:50:37.101822   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:50:39.102698   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:50:41.600954   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:50:43.603717   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:50:46.100148   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:50:48.102561   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:50:50.601277   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:50:53.102456   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:50:55.603210   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:50:58.102766   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:51:00.604392   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:51:03.102251   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:51:05.602781   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:51:07.609467   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:51:10.104956   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:51:12.602338   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:51:15.101002   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:51:17.105009   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:51:19.602280   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:51:21.602722   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:51:23.603889   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:51:26.105586   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:51:28.605528   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:51:31.106590   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"
	I0202 17:51:33.114221   95060 pod_ready.go:102] pod "coredns-64897985d-lchhn" in "kube-system" namespace has status "Ready":"False"

** /stderr **
net_test.go:101: failed start: signal: killed
--- FAIL: TestNetworkPlugins/group/kubenet/Start (290.74s)
E0202 18:12:17.362729   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/custom-weave-20220202171134-76172/client.crt: no such file or directory
E0202 18:12:28.396446   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/cilium-20220202171134-76172/client.crt: no such file or directory
panic: test timed out after 2h0m0s

goroutine 3262 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:1788 +0x8e
created by time.goFunc
	/usr/local/go/src/time/sleep.go:180 +0x31

goroutine 1 [chan receive]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1225 +0x311
testing.tRunner(0xc0005c3ba0, 0xc0006b3c48)
	/usr/local/go/src/testing/testing.go:1265 +0x13b
testing.runTests(0xc00019d180, {0x4259520, 0x25, 0x25}, {0xc000066198, 0xffffffffffffffff, 0x42797c0})
	/usr/local/go/src/testing/testing.go:1596 +0x43f
testing.(*M).Run(0xc00019d180)
	/usr/local/go/src/testing/testing.go:1504 +0x51d
k8s.io/minikube/test/integration.TestMain(0x101000004838108)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x9e
main.main()
	_testmain.go:117 +0x165

goroutine 6 [syscall]:
syscall.syscall(0x107e6e0, 0x8, 0x33, 0x0)
	/usr/local/go/src/runtime/sys_darwin.go:22 +0x3b
syscall.fcntl(0x10e, 0x40000, 0x0)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:320 +0x30
internal/poll.(*FD).Fsync.func1(...)
	/usr/local/go/src/internal/poll/fd_fsync_darwin.go:18
internal/poll.ignoringEINTR(...)
	/usr/local/go/src/internal/poll/fd_posix.go:75
internal/poll.(*FD).Fsync(0xc000fea010)
	/usr/local/go/src/internal/poll/fd_fsync_darwin.go:17 +0xfc
os.(*File).Sync(0xc000fea010)
	/usr/local/go/src/os/file_posix.go:169 +0x4e
k8s.io/klog/v2.(*syncBuffer).Sync(0xc001482090)
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.40.1/klog.go:1183 +0x1d
k8s.io/klog/v2.(*loggingT).flushAll(0x4279ce0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.40.1/klog.go:1303 +0x6f
k8s.io/klog/v2.(*loggingT).lockAndFlushAll(0x4279ce0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.40.1/klog.go:1291 +0x4a
k8s.io/klog/v2.(*loggingT).flushDaemon(0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.40.1/klog.go:1284 +0x5b
created by k8s.io/klog/v2.init.0
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.40.1/klog.go:420 +0xfb

goroutine 7 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00019c580)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.23.0/stats/view/worker.go:276 +0xb9
created by go.opencensus.io/stats/view.init.0
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.23.0/stats/view/worker.go:34 +0x92

goroutine 994 [select, 103 minutes]:
net/http.(*persistConn).readLoop(0xc0011c07e0)
	/usr/local/go/src/net/http/transport.go:2207 +0xd8a
created by net/http.(*Transport).dialConn
	/usr/local/go/src/net/http/transport.go:1747 +0x1e05

goroutine 785 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000c780c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:231 +0x34e
created by k8s.io/client-go/util/workqueue.newDelayingQueue
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:68 +0x23b

goroutine 2369 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x31de420, 0xc000346dc0}, 0xc0013ec288, 0x1593f0a)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:655 +0xe7
k8s.io/apimachinery/pkg/util/wait.poll({0x31de420, 0xc000346dc0}, 0x38, 0x1593385, 0xc000e621d0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:591 +0x9a
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x31de420, 0xc000346dc0}, 0x100529d, 0xc0017244e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:542 +0x49
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000698fd0, 0x1341d0d, 0xc000ce8480)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:533 +0x7c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:142 +0x326

goroutine 786 [chan receive, 105 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0014b9dc0, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:147 +0x335
created by k8s.io/client-go/transport.(*tlsTransportCache).get
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cache.go:104 +0x485

goroutine 3020 [select, 13 minutes]:
k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:298 +0x77
created by k8s.io/apimachinery/pkg/util/wait.contextForChannel
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:297 +0xc8

goroutine 2404 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0012a26d0, 0x1a)
	/usr/local/go/src/runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0x31c82a8)
	/usr/local/go/src/sync/cond.go:56 +0x8c
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0011f2780)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/queue.go:151 +0x9e
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0012a2740)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:156 +0x58
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0d1fe8)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x2bdda08, {0x31a2480, 0xc000cefb00}, 0x1, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x11126c0, 0x3b9aca00, 0x0, 0x1, 0x103f0c5)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0x0, 0xc0017d1200, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:90 +0x25
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:140 +0x26f

goroutine 250 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00010d8c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:231 +0x34e
created by k8s.io/client-go/util/workqueue.newDelayingQueue
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:68 +0x23b

goroutine 238 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc00059c310, 0x2d)
	/usr/local/go/src/runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0x31c82a8)
	/usr/local/go/src/sync/cond.go:56 +0x8c
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00010d5c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/queue.go:151 +0x9e
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00059c3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:156 +0x58
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x48ee1e8)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000696f68, {0x31a2480, 0xc00014cd80}, 0x1, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x31a2d20, 0x3b9aca00, 0x0, 0x1, 0x103f0c5)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0xc000696fd0, 0x130c8e6, 0xc0009d6420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:90 +0x25
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:140 +0x26f

goroutine 251 [chan receive, 116 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00059c3c0, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:147 +0x335
created by k8s.io/client-go/transport.(*tlsTransportCache).get
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cache.go:104 +0x485

goroutine 1265 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc00011c950, 0x28)
	/usr/local/go/src/runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0x31c82a8)
	/usr/local/go/src/sync/cond.go:56 +0x8c
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000066900)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/queue.go:151 +0x9e
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00011ca80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:156 +0x58
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc093c10)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0016a7bc0, {0x31a2480, 0xc000eb0240}, 0x1, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00054efb8, 0x3b9aca00, 0x0, 0x1, 0x103f0c5)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0x0, 0xc0014efc80, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:90 +0x25
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:140 +0x26f

goroutine 1266 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x31de420, 0xc000346ac0}, 0xc001008060, 0x1593f0a)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:655 +0xe7
k8s.io/apimachinery/pkg/util/wait.poll({0x31de420, 0xc000346ac0}, 0x38, 0x1593385, 0xc000ff6240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:591 +0x9a
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x31de420, 0xc000346ac0}, 0x100529d, 0xc0017241e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:542 +0x49
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000697fd0, 0x1341d0d, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:533 +0x7c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:142 +0x326

goroutine 239 [select]:
k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x31de420, 0xc0007fb340}, 0xc00078de48, 0x1593f0a)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:655 +0xe7
k8s.io/apimachinery/pkg/util/wait.poll({0x31de420, 0xc0007fb340}, 0x38, 0x1593385, 0xc000411730)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:591 +0x9a
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x31de420, 0xc0007fb340}, 0xc000346d50, 0xc0009de5a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:542 +0x49
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0000897d0, 0x130c8e6, 0xc000c949c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:533 +0x7c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:142 +0x326

goroutine 2224 [chan receive]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1225 +0x311
testing.tRunner(0xc0003eb040, 0x2f1c530)
	/usr/local/go/src/testing/testing.go:1265 +0x13b
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a

goroutine 2458 [select, 40 minutes]:
k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:298 +0x77
created by k8s.io/apimachinery/pkg/util/wait.contextForChannel
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:297 +0xc8

goroutine 779 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x31de420, 0xc0007fb7c0}, 0xc00078dd28, 0x1593f0a)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:655 +0xe7
k8s.io/apimachinery/pkg/util/wait.poll({0x31de420, 0xc0007fb7c0}, 0x38, 0x1593385, 0xc000fadc40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:591 +0x9a
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x31de420, 0xc0007fb7c0}, 0x0, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:542 +0x49
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00069afd0, 0x1341d0d, 0xc0014b9480)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:533 +0x7c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:142 +0x326

goroutine 1249 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000067560)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:231 +0x34e
created by k8s.io/client-go/util/workqueue.newDelayingQueue
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:68 +0x23b

goroutine 3241 [chan receive]:
testing.(*T).Run(0xc000e3bd40, {0x2be3b7c, 0x1112573}, 0xc00148c500)
	/usr/local/go/src/testing/testing.go:1307 +0x375
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc000e3bd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:140 +0x4ae
testing.tRunner(0xc000e3bd40, 0xc00148c480)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a

goroutine 240 [select, 116 minutes]:
k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:298 +0x77
created by k8s.io/apimachinery/pkg/util/wait.contextForChannel
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:297 +0xc8

goroutine 257 [select]:
k8s.io/apimachinery/pkg/util/wait.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:705 +0x1c9
created by k8s.io/apimachinery/pkg/util/wait.poller.func1
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:688 +0xcf

goroutine 2528 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0006457d0, 0x19)
	/usr/local/go/src/runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0x31c82a8)
	/usr/local/go/src/sync/cond.go:56 +0x8c
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001221980)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/queue.go:151 +0x9e
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000645800)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:156 +0x58
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc1e3280)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0013d05a0, {0x31a2480, 0xc000fc24b0}, 0x1, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000dcd7b8, 0x3b9aca00, 0x0, 0x1, 0x103f0c5)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0x0, 0xc000804480, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:90 +0x25
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:140 +0x26f

goroutine 1268 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:705 +0x1c9
created by k8s.io/apimachinery/pkg/util/wait.poller.func1
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:688 +0xcf

goroutine 1820 [chan receive, 46 minutes]:
testing.(*T).Run(0xc0003eaea0, {0x2bda934, 0x61fb2b7d}, 0x2f1c530)
	/usr/local/go/src/testing/testing.go:1307 +0x375
k8s.io/minikube/test/integration.TestStartStop(0xc0010264e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:45 +0x3b
testing.tRunner(0xc0003eaea0, 0x2f1c538)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a

goroutine 2406 [select, 42 minutes]:
k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:298 +0x77
created by k8s.io/apimachinery/pkg/util/wait.contextForChannel
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:297 +0xc8

goroutine 1250 [chan receive, 99 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00011ca80, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:147 +0x335
created by k8s.io/client-go/transport.(*tlsTransportCache).get
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cache.go:104 +0x485

goroutine 2405 [select]:
k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x31de420, 0xc000f13a00}, 0xc001008cf0, 0x1593f0a)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:655 +0xe7
k8s.io/apimachinery/pkg/util/wait.poll({0x31de420, 0xc000f13a00}, 0x38, 0x1593385, 0xc000f02f60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:591 +0x9a
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x31de420, 0xc000f13a00}, 0x0, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:542 +0x49
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000dcbfd0, 0x1596746, 0xc0011f28a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:533 +0x7c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:142 +0x326

goroutine 2534 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001221aa0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:231 +0x34e
created by k8s.io/client-go/util/workqueue.newDelayingQueue
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:68 +0x23b

goroutine 2572 [select, 38 minutes]:
k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:298 +0x77
created by k8s.io/apimachinery/pkg/util/wait.contextForChannel
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:297 +0xc8

goroutine 2424 [chan receive, 42 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0012a2740, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:147 +0x335
created by k8s.io/client-go/transport.(*tlsTransportCache).get
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cache.go:104 +0x485

goroutine 2573 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:705 +0x1c9
created by k8s.io/apimachinery/pkg/util/wait.poller.func1
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:688 +0xcf

goroutine 1784 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x31de420, 0xc001470040}, 0xc001008138, 0x1593f0a)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:655 +0xe7
k8s.io/apimachinery/pkg/util/wait.poll({0x31de420, 0xc001470040}, 0x38, 0x1593385, 0xc001410280)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:591 +0x9a
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x31de420, 0xc001470040}, 0x0, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:542 +0x49
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000699fd0, 0x1341d0d, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:533 +0x7c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:142 +0x326

goroutine 3019 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x31de420, 0xc0014b8380}, 0xc000ee4228, 0x1593f0a)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:655 +0xe7
k8s.io/apimachinery/pkg/util/wait.poll({0x31de420, 0xc0014b8380}, 0x38, 0x1593385, 0xc0013da410)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:591 +0x9a
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x31de420, 0xc0014b8380}, 0x100529d, 0xc000c784e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:542 +0x49
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00128dfd0, 0x1341d0d, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:533 +0x7c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:142 +0x326

goroutine 2587 [chan receive, 38 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000fd0440, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:147 +0x335
created by k8s.io/client-go/transport.(*tlsTransportCache).get
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cache.go:104 +0x485

goroutine 2459 [select]:
k8s.io/apimachinery/pkg/util/wait.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:705 +0x1c9
created by k8s.io/apimachinery/pkg/util/wait.poller.func1
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:688 +0xcf

goroutine 2547 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:705 +0x1c9
created by k8s.io/apimachinery/pkg/util/wait.poller.func1
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:688 +0xcf

goroutine 995 [select, 103 minutes]:
net/http.(*persistConn).writeLoop(0xc0011c07e0)
	/usr/local/go/src/net/http/transport.go:2386 +0xfb
created by net/http.(*Transport).dialConn
	/usr/local/go/src/net/http/transport.go:1748 +0x1e65

goroutine 2352 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc000fd07d0, 0x1a)
	/usr/local/go/src/runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0x31c82a8)
	/usr/local/go/src/sync/cond.go:56 +0x8c
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0017fb800)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/queue.go:151 +0x9e
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000fd0800)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:156 +0x58
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc1e2290)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0017fbbc0, {0x31a2480, 0xc000dba570}, 0x1, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000696fb8, 0x3b9aca00, 0x0, 0x1, 0x103f0c5)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0x0, 0xc0017d0c00, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:90 +0x25
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:140 +0x26f

goroutine 617 [IO wait, 107 minutes]:
internal/poll.runtime_pollWait(0xc34ea90, 0x72)
	/usr/local/go/src/runtime/netpoll.go:234 +0x89
internal/poll.(*pollDesc).wait(0xc000ce8380, 0x4, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x32
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000ce8380)
	/usr/local/go/src/internal/poll/fd_unix.go:402 +0x22c
net.(*netFD).accept(0xc000ce8380)
	/usr/local/go/src/net/fd_unix.go:173 +0x35
net.(*TCPListener).accept(0xc000fa2450)
	/usr/local/go/src/net/tcpsock_posix.go:140 +0x28
net.(*TCPListener).Accept(0xc000fa2450)
	/usr/local/go/src/net/tcpsock.go:262 +0x3d
net/http.(*Server).Serve(0xc0006fe000, {0x31d4a80, 0xc000fa2450})
	/usr/local/go/src/net/http/server.go:3002 +0x394
net/http.(*Server).ListenAndServe(0xc0006fe000)
	/usr/local/go/src/net/http/server.go:2931 +0x7d
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd, 0xc000703ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2076 +0x1e
created by k8s.io/minikube/test/integration.startHTTPProxy
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2075 +0x149

goroutine 1794 [chan receive, 63 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0003478c0, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:147 +0x335
created by k8s.io/client-go/transport.(*tlsTransportCache).get
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cache.go:104 +0x485

goroutine 2545 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x31de420, 0xc0007fa140}, 0xc0000511e8, 0x1593f0a)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:655 +0xe7
k8s.io/apimachinery/pkg/util/wait.poll({0x31de420, 0xc0007fa140}, 0x38, 0x1593385, 0xc0010141a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:591 +0x9a
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x31de420, 0xc0007fa140}, 0x100529d, 0xc000f5c360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:542 +0x49
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0000897d0, 0x1341d0d, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:533 +0x7c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:142 +0x326

goroutine 1783 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc000347750, 0x1f)
	/usr/local/go/src/runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0x31c82a8)
	/usr/local/go/src/sync/cond.go:56 +0x8c
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000cd8840)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/queue.go:151 +0x9e
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0003478c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:156 +0x58
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0d0a20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000f5de00, {0x31a2480, 0xc000efe2d0}, 0x1, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00031bfb8, 0x3b9aca00, 0x0, 0x1, 0x103f0c5)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0x0, 0xc0014eede0, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:90 +0x25
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:140 +0x26f

goroutine 2698 [chan receive, 31 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000fd0fc0, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:147 +0x335
created by k8s.io/client-go/transport.(*tlsTransportCache).get
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cache.go:104 +0x485

goroutine 778 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc0014b9d90, 0x29)
	/usr/local/go/src/runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0x31c82a8)
	/usr/local/go/src/sync/cond.go:56 +0x8c
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000975b00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/queue.go:151 +0x9e
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0014b9dc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:156 +0x58
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc3bf4c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x31343a33313a3631, {0x31a2480, 0xc001496ea0}, 0x1, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x3331363132303230, 0x3b9aca00, 0x0, 0x1, 0x103f0c5)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0x0, 0xc000be59e0, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:90 +0x25
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:140 +0x26f

goroutine 2586 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0005e3440)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:231 +0x34e
created by k8s.io/client-go/util/workqueue.newDelayingQueue
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:68 +0x23b

goroutine 2570 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc000fd0410, 0x18)
	/usr/local/go/src/runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0x31c82a8)
	/usr/local/go/src/sync/cond.go:56 +0x8c
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0005e3320)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/queue.go:151 +0x9e
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000fd0440)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:156 +0x58
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc043638)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0, {0x31a2480, 0xc000f581e0}, 0x1, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0017d0ea0, 0x3b9aca00, 0x0, 0x1, 0x103f0c5)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0x0, 0xc0008050e0, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:90 +0x25
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:140 +0x26f

goroutine 780 [select, 105 minutes]:
k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:298 +0x77
created by k8s.io/apimachinery/pkg/util/wait.contextForChannel
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:297 +0xc8

goroutine 781 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:705 +0x1c9
created by k8s.io/apimachinery/pkg/util/wait.poller.func1
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:688 +0xcf

goroutine 2571 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x31de420, 0xc00084cb40}, 0xc001305200, 0x1593f0a)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:655 +0xe7
k8s.io/apimachinery/pkg/util/wait.poll({0x31de420, 0xc00084cb40}, 0x38, 0x1593385, 0xc000e62520)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:591 +0x9a
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x31de420, 0xc00084cb40}, 0x100529d, 0xc00068daa0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:542 +0x49
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000dccfd0, 0x1341d0d, 0xc001221aa0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:533 +0x7c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:142 +0x326

goroutine 2979 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x31de420, 0xc000fd0280}, 0xc0013ec090, 0x1593f0a)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:655 +0xe7
k8s.io/apimachinery/pkg/util/wait.poll({0x31de420, 0xc000fd0280}, 0x38, 0x1593385, 0xc000f8e170)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:591 +0x9a
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x31de420, 0xc000fd0280}, 0x100529d, 0xc000c794a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:542 +0x49
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000e087d0, 0x1341d0d, 0x74736e692d767379)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:533 +0x7c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:142 +0x326

goroutine 1786 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:705 +0x1c9
created by k8s.io/apimachinery/pkg/util/wait.poller.func1
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:688 +0xcf

goroutine 1785 [select, 63 minutes]:
k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:298 +0x77
created by k8s.io/apimachinery/pkg/util/wait.contextForChannel
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:297 +0xc8

goroutine 1793 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000cd8960)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:231 +0x34e
created by k8s.io/client-go/util/workqueue.newDelayingQueue
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:68 +0x23b

goroutine 2457 [select]:
k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x31de420, 0xc001470000}, 0xc001182000, 0x1593f0a)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:655 +0xe7
k8s.io/apimachinery/pkg/util/wait.poll({0x31de420, 0xc001470000}, 0x38, 0x1593385, 0xc0013da050)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:591 +0x9a
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x31de420, 0xc001470000}, 0x1594380, 0xc000699fd0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:542 +0x49
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0, 0xc0017d1b60, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:533 +0x7c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:142 +0x326

goroutine 2535 [chan receive, 38 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000645800, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:147 +0x335
created by k8s.io/client-go/transport.(*tlsTransportCache).get
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cache.go:104 +0x485

goroutine 1267 [select, 99 minutes]:
k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:298 +0x77
created by k8s.io/apimachinery/pkg/util/wait.contextForChannel
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:297 +0xc8

goroutine 2371 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:705 +0x1c9
created by k8s.io/apimachinery/pkg/util/wait.poller.func1
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:688 +0xcf

goroutine 2841 [chan receive, 20 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0007fb040, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:147 +0x335
created by k8s.io/client-go/transport.(*tlsTransportCache).get
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cache.go:104 +0x485

goroutine 2354 [chan receive, 42 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000fd0800, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:147 +0x335
created by k8s.io/client-go/transport.(*tlsTransportCache).get
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cache.go:104 +0x485

goroutine 2423 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0011f28a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:231 +0x34e
created by k8s.io/client-go/util/workqueue.newDelayingQueue
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:68 +0x23b

goroutine 2246 [chan receive]:
testing.(*T).Run(0xc001315520, {0x2bdd273, 0xc000d12f20}, 0xc00148c480)
	/usr/local/go/src/testing/testing.go:1307 +0x375
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001315520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:115 +0x725
testing.tRunner(0xc001315520, 0xc000fd0240)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a

goroutine 2471 [chan receive, 40 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000e2e8c0, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:147 +0x335
created by k8s.io/client-go/transport.(*tlsTransportCache).get
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cache.go:104 +0x485

goroutine 2456 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000e2e890, 0x1a)
	/usr/local/go/src/runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0x31c82a8)
	/usr/local/go/src/sync/cond.go:56 +0x8c
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000129860)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/queue.go:151 +0x9e
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000e2e8c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:156 +0x58
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0d0a20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x6231313931633266, {0x31a2480, 0xc000efe090}, 0x1, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x696d6d6f43636e75, 0x3b9aca00, 0x0, 0x32, 0x6638336138306466)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0x3a64657463657078, 0x3136653434363231, 0x6435306235326534)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:90 +0x25
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:140 +0x26f

goroutine 2353 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0017fb920)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:231 +0x34e
created by k8s.io/client-go/util/workqueue.newDelayingQueue
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:68 +0x23b

goroutine 2470 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000129980)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:231 +0x34e
created by k8s.io/client-go/util/workqueue.newDelayingQueue
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:68 +0x23b

goroutine 2546 [select, 38 minutes]:
k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:298 +0x77
created by k8s.io/apimachinery/pkg/util/wait.contextForChannel
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:297 +0xc8

goroutine 2370 [select, 42 minutes]:
k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:298 +0x77
created by k8s.io/apimachinery/pkg/util/wait.contextForChannel
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:297 +0xc8

goroutine 2407 [select]:
k8s.io/apimachinery/pkg/util/wait.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:705 +0x1c9
created by k8s.io/apimachinery/pkg/util/wait.poller.func1
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:688 +0xcf

goroutine 2697 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000129740)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:231 +0x34e
created by k8s.io/client-go/util/workqueue.newDelayingQueue
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:68 +0x23b

goroutine 2709 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc000fd0f90, 0x16)
	/usr/local/go/src/runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0x31c82a8)
	/usr/local/go/src/sync/cond.go:56 +0x8c
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000129620)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/queue.go:151 +0x9e
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000fd0fc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:156 +0x58
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc093da8)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x319d3c0, {0x31a2480, 0xc000eb6db0}, 0x1, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00031a7b8, 0x3b9aca00, 0x0, 0x1, 0x103f0c5)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0x0, 0xc000064ae0, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:90 +0x25
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:140 +0x26f

goroutine 2710 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x31de420, 0xc0014b8640}, 0xc0013ec5d0, 0x1593f0a)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:655 +0xe7
k8s.io/apimachinery/pkg/util/wait.poll({0x31de420, 0xc0014b8640}, 0x38, 0x1593385, 0xc001492830)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:591 +0x9a
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x31de420, 0xc0014b8640}, 0x100529d, 0xc00068daa0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:542 +0x49
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000dccfd0, 0x1341d0d, 0xc001221aa0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:533 +0x7c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:142 +0x326

goroutine 2711 [select, 31 minutes]:
k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:298 +0x77
created by k8s.io/apimachinery/pkg/util/wait.contextForChannel
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:297 +0xc8

goroutine 2712 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:705 +0x1c9
created by k8s.io/apimachinery/pkg/util/wait.poller.func1
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:688 +0xcf

goroutine 2978 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc000e2ed10, 0x2)
	/usr/local/go/src/runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0x31c82a8)
	/usr/local/go/src/sync/cond.go:56 +0x8c
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0011f25a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/queue.go:151 +0x9e
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000e2ed40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:156 +0x58
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x48ee1e8)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0, {0x31a2480, 0xc00014c420}, 0x1, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000805f20, 0x3b9aca00, 0x0, 0x1, 0x103f0c5)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0x0, 0xc0008053e0, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:90 +0x25
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:140 +0x26f

goroutine 2972 [chan receive, 15 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000e2ed40, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:147 +0x335
created by k8s.io/client-go/transport.(*tlsTransportCache).get
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cache.go:104 +0x485

goroutine 2855 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x31de420, 0xc000e2e4c0}, 0xc0011823d8, 0x1593f0a)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:655 +0xe7
k8s.io/apimachinery/pkg/util/wait.poll({0x31de420, 0xc000e2e4c0}, 0x38, 0x1593385, 0xc001056b70)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:591 +0x9a
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x31de420, 0xc000e2e4c0}, 0x100529d, 0xc0012213e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:542 +0x49
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0012e77d0, 0x1341d0d, 0xc00148c580)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:533 +0x7c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:142 +0x326

goroutine 3243 [IO wait]:
internal/poll.runtime_pollWait(0xc34ed48, 0x72)
	/usr/local/go/src/runtime/netpoll.go:234 +0x89
internal/poll.(*pollDesc).wait(0xc000f5cc00, 0xc000f9af65, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x32
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000f5cc00, {0xc000f9af65, 0x29b, 0x29b})
	/usr/local/go/src/internal/poll/fd_unix.go:167 +0x25a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:32
os.(*File).Read(0xc001396fa8, {0xc000f9af65, 0xc000e05ea0, 0xc000e05ea0})
	/usr/local/go/src/os/file.go:119 +0x5e
bytes.(*Buffer).ReadFrom(0xc0005ffe60, {0x31a3140, 0xc001396fa8})
	/usr/local/go/src/bytes/buffer.go:204 +0x98
io.copyBuffer({0x319d3c0, 0xc0005ffe60}, {0x31a3140, 0xc001396fa8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:409 +0x14b
io.Copy(...)
	/usr/local/go/src/io/io.go:382
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:311 +0x3a
os/exec.(*Cmd).Start.func1(0x0)
	/usr/local/go/src/os/exec/exec.go:441 +0x25
created by os/exec.(*Cmd).Start
	/usr/local/go/src/os/exec/exec.go:440 +0x80d

goroutine 2981 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:705 +0x1c9
created by k8s.io/apimachinery/pkg/util/wait.poller.func1
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:688 +0xcf

goroutine 2971 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0011f2900)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:231 +0x34e
created by k8s.io/client-go/util/workqueue.newDelayingQueue
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:68 +0x23b

goroutine 2980 [select, 15 minutes]:
k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:298 +0x77
created by k8s.io/apimachinery/pkg/util/wait.contextForChannel
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:297 +0xc8

goroutine 2840 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001292d20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:231 +0x34e
created by k8s.io/client-go/util/workqueue.newDelayingQueue
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:68 +0x23b

goroutine 2854 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc0007fb010, 0x13)
	/usr/local/go/src/runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0x31c82a8)
	/usr/local/go/src/sync/cond.go:56 +0x8c
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001292c00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/queue.go:151 +0x9e
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0007fb040)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:156 +0x58
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc1e1b20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x30333a3731203230, {0x31a2480, 0xc000d98de0}, 0x1, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x65646e69287b7b27, 0x3b9aca00, 0x0, 0x1, 0x103f0c5)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0x0, 0xc000804600, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:90 +0x25
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:140 +0x26f

goroutine 2856 [select, 20 minutes]:
k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:298 +0x77
created by k8s.io/apimachinery/pkg/util/wait.contextForChannel
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:297 +0xc8

goroutine 2857 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:705 +0x1c9
created by k8s.io/apimachinery/pkg/util/wait.poller.func1
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:688 +0xcf

goroutine 3035 [chan receive, 13 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000fd0c40, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:147 +0x335
created by k8s.io/client-go/transport.(*tlsTransportCache).get
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cache.go:104 +0x485

goroutine 3021 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:705 +0x1c9
created by k8s.io/apimachinery/pkg/util/wait.poller.func1
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:688 +0xcf

goroutine 3018 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc000fd0c10, 0x2)
	/usr/local/go/src/runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0x31c82a8)
	/usr/local/go/src/sync/cond.go:56 +0x8c
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0011f32c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/queue.go:151 +0x9e
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000fd0c40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:156 +0x58
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc1e1b20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x2f, {0x31a2480, 0xc000d98630}, 0x1, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x11126c0, 0x3b9aca00, 0x0, 0x1, 0x103f0c5)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0x0, 0xc000065aa0, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:90 +0x25
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:140 +0x26f

goroutine 3034 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0011f33e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:231 +0x34e
created by k8s.io/client-go/util/workqueue.newDelayingQueue
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:68 +0x23b

goroutine 3242 [syscall]:
syscall.syscall6(0x107e500, 0x18059, 0xc000d8bb3c, 0x0, 0xc000f7ea20, 0x0, 0x0)
	/usr/local/go/src/runtime/sys_darwin.go:44 +0x3b
syscall.wait4(0xc000d8bb40, 0x100d487, 0x90, 0x2b68980)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x48
syscall.Wait4(0xc000530b00, 0xc000d8bb74, 0xc000d8baf8, 0x0)
	/usr/local/go/src/syscall/syscall_bsd.go:145 +0x2b
os.(*Process).wait(0xc000f65da0)
	/usr/local/go/src/os/exec_unix.go:44 +0x77
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:132
os/exec.(*Cmd).Wait(0xc000eeb600)
	/usr/local/go/src/os/exec/exec.go:507 +0x54
os/exec.(*Cmd).Run(0xc0000fbb00)
	/usr/local/go/src/os/exec/exec.go:341 +0x39
k8s.io/minikube/test/integration.Run(0xc0010e2000, 0xc000eeb600)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:104 +0x1f5
k8s.io/minikube/test/integration.validateFirstStart({0x31de490, 0xc000f5cae0}, 0xc0010e2000, {0xc001192300, 0x1fb9ac376acc}, {0xfb9640, 0xfb96400128c758}, {0x61fb39e4, 0xc00128c760}, {0xc00020db00, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:171 +0x175
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0x1)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:141 +0x72
testing.tRunner(0xc0010e2000, 0xc00148c500)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a

goroutine 3244 [IO wait]:
internal/poll.runtime_pollWait(0xc34e608, 0x72)
	/usr/local/go/src/runtime/netpoll.go:234 +0x89
internal/poll.(*pollDesc).wait(0xc000f5ccc0, 0xc001345006, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x32
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000f5ccc0, {0xc001345006, 0xcdfa, 0xcdfa})
	/usr/local/go/src/internal/poll/fd_unix.go:167 +0x25a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:32
os.(*File).Read(0xc001396fc0, {0xc001345006, 0x3bd2728, 0xc00128c6a0})
	/usr/local/go/src/os/file.go:119 +0x5e
bytes.(*Buffer).ReadFrom(0xc000da8000, {0x31a3140, 0xc001396fc0})
	/usr/local/go/src/bytes/buffer.go:204 +0x98
io.copyBuffer({0x319d3c0, 0xc000da8000}, {0x31a3140, 0xc001396fc0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:409 +0x14b
io.Copy(...)
	/usr/local/go/src/io/io.go:382
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:311 +0x3a
os/exec.(*Cmd).Start.func1(0xc00148c500)
	/usr/local/go/src/os/exec/exec.go:441 +0x25
created by os/exec.(*Cmd).Start
	/usr/local/go/src/os/exec/exec.go:440 +0x80d

goroutine 3245 [select]:
os/exec.(*Cmd).Start.func2()
	/usr/local/go/src/os/exec/exec.go:449 +0x7b
created by os/exec.(*Cmd).Start
	/usr/local/go/src/os/exec/exec.go:448 +0x7ef


Test pass (201/227)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 31.14
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.29
10 TestDownloadOnly/v1.23.2/json-events 9.77
11 TestDownloadOnly/v1.23.2/preload-exists 0
14 TestDownloadOnly/v1.23.2/kubectl 0
15 TestDownloadOnly/v1.23.2/LogsDuration 0.28
17 TestDownloadOnly/v1.23.3-rc.0/json-events 11.79
18 TestDownloadOnly/v1.23.3-rc.0/preload-exists 0
21 TestDownloadOnly/v1.23.3-rc.0/kubectl 0
22 TestDownloadOnly/v1.23.3-rc.0/LogsDuration 0.28
23 TestDownloadOnly/DeleteAll 1.13
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.64
25 TestDownloadOnlyKic 9.04
26 TestBinaryMirror 1.95
27 TestOffline 134.38
29 TestAddons/Setup 185.73
34 TestAddons/parallel/HelmTiller 9.68
36 TestAddons/parallel/CSI 49.07
38 TestAddons/serial/GCPAuth 18.78
39 TestAddons/StoppedEnableDisable 19.29
40 TestCertOptions 85.34
41 TestCertExpiration 276.49
42 TestDockerFlags 97.28
43 TestForceSystemdFlag 339.94
44 TestForceSystemdEnv 82.77
46 TestHyperKitDriverInstallOrUpdate 7.8
49 TestErrorSpam/setup 79.04
50 TestErrorSpam/start 2.42
51 TestErrorSpam/status 1.96
52 TestErrorSpam/pause 2.2
53 TestErrorSpam/unpause 2.32
54 TestErrorSpam/stop 18.98
57 TestFunctional/serial/CopySyncFile 0
58 TestFunctional/serial/StartWithProxy 131.54
59 TestFunctional/serial/AuditLog 0
60 TestFunctional/serial/SoftStart 8.03
61 TestFunctional/serial/KubeContext 0.04
62 TestFunctional/serial/KubectlGetPods 1.78
65 TestFunctional/serial/CacheCmd/cache/add_remote 10.13
66 TestFunctional/serial/CacheCmd/cache/add_local 2.13
67 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.08
68 TestFunctional/serial/CacheCmd/cache/list 0.07
69 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.68
70 TestFunctional/serial/CacheCmd/cache/cache_reload 4.1
71 TestFunctional/serial/CacheCmd/cache/delete 0.14
72 TestFunctional/serial/MinikubeKubectlCmd 0.48
73 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.58
74 TestFunctional/serial/ExtraConfig 27.36
75 TestFunctional/serial/ComponentHealth 0.06
76 TestFunctional/serial/LogsCmd 3.09
77 TestFunctional/serial/LogsFileCmd 3.17
79 TestFunctional/parallel/ConfigCmd 0.41
80 TestFunctional/parallel/DashboardCmd 3.91
81 TestFunctional/parallel/DryRun 1.58
82 TestFunctional/parallel/InternationalLanguage 0.72
83 TestFunctional/parallel/StatusCmd 2.38
87 TestFunctional/parallel/AddonsCmd 0.36
88 TestFunctional/parallel/PersistentVolumeClaim 34.02
90 TestFunctional/parallel/SSHCmd 1.43
91 TestFunctional/parallel/CpCmd 2.87
92 TestFunctional/parallel/MySQL 22.04
93 TestFunctional/parallel/FileSync 0.75
94 TestFunctional/parallel/CertSync 4.4
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.62
103 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
105 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.72
106 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
107 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 9.76
111 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
112 TestFunctional/parallel/ProfileCmd/profile_not_create 0.92
113 TestFunctional/parallel/ProfileCmd/profile_list 0.75
114 TestFunctional/parallel/ProfileCmd/profile_json_output 0.83
115 TestFunctional/parallel/MountCmd/any-port 12.06
116 TestFunctional/parallel/Version/short 0.1
117 TestFunctional/parallel/Version/components 1.34
118 TestFunctional/parallel/ImageCommands/ImageListShort 0.44
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.44
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.44
121 TestFunctional/parallel/ImageCommands/ImageListYaml 0.43
122 TestFunctional/parallel/ImageCommands/ImageBuild 5.99
123 TestFunctional/parallel/ImageCommands/Setup 4.1
124 TestFunctional/parallel/MountCmd/specific-port 3.62
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.85
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.97
127 TestFunctional/parallel/DockerEnv/bash 2.63
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.49
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.36
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.89
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.36
132 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.69
133 TestFunctional/parallel/ImageCommands/ImageRemove 1.38
134 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.67
135 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 3.6
136 TestFunctional/delete_addon-resizer_images 0.28
137 TestFunctional/delete_my-image_image 0.12
138 TestFunctional/delete_minikube_cached_images 0.12
141 TestIngressAddonLegacy/StartLegacyK8sCluster 138.11
143 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 16.42
144 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.62
148 TestJSONOutput/start/Command 133.63
149 TestJSONOutput/start/Audit 0
151 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
152 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
154 TestJSONOutput/pause/Command 1.57
155 TestJSONOutput/pause/Audit 0
157 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
158 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
160 TestJSONOutput/unpause/Command 0.85
161 TestJSONOutput/unpause/Audit 0
163 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
166 TestJSONOutput/stop/Command 18.29
167 TestJSONOutput/stop/Audit 0
169 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
170 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
171 TestErrorJSONOutput 0.79
173 TestKicCustomNetwork/create_custom_network 96.53
174 TestKicCustomNetwork/use_default_bridge_network 84.26
175 TestKicExistingNetwork 98.15
176 TestMainNoArgs 0.07
179 TestMountStart/serial/StartWithMountFirst 54.09
180 TestMountStart/serial/VerifyMountFirst 0.63
181 TestMountStart/serial/StartWithMountSecond 55.33
182 TestMountStart/serial/VerifyMountSecond 0.62
183 TestMountStart/serial/DeleteFirst 13.51
184 TestMountStart/serial/VerifyMountPostDelete 0.62
185 TestMountStart/serial/Stop 8.23
186 TestMountStart/serial/RestartStopped 33.32
187 TestMountStart/serial/VerifyMountPostStop 0.63
190 TestMultiNode/serial/FreshStart2Nodes 245.41
191 TestMultiNode/serial/DeployApp2Nodes 9.17
192 TestMultiNode/serial/PingHostFrom2Pods 0.86
193 TestMultiNode/serial/AddNode 119.42
194 TestMultiNode/serial/ProfileList 0.71
195 TestMultiNode/serial/CopyFile 23.12
196 TestMultiNode/serial/StopNode 11.32
197 TestMultiNode/serial/StartAfterStop 54.76
198 TestMultiNode/serial/RestartKeepsNodes 266.1
199 TestMultiNode/serial/DeleteNode 18.19
200 TestMultiNode/serial/StopMultiNode 37.46
201 TestMultiNode/serial/RestartMultiNode 147.1
202 TestMultiNode/serial/ValidateNameConflict 107.64
206 TestPreload 222.15
208 TestScheduledStopUnix 162.76
209 TestSkaffold 132.92
211 TestInsufficientStorage 72.41
212 TestRunningBinaryUpgrade 125.73
214 TestKubernetesUpgrade 220.12
215 TestMissingContainerUpgrade 197.12
227 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 8.58
228 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 11.55
229 TestStoppedBinaryUpgrade/Setup 0.75
230 TestStoppedBinaryUpgrade/Upgrade 142.99
232 TestPause/serial/Start 111.09
233 TestStoppedBinaryUpgrade/MinikubeLogs 2.82
242 TestNoKubernetes/serial/StartNoK8sWithVersion 0.32
243 TestNoKubernetes/serial/StartWithK8s 60.98
244 TestNoKubernetes/serial/StartWithStopK8s 29.07
245 TestPause/serial/SecondStartNoReconfiguration 7.86
246 TestPause/serial/Pause 0.86
247 TestPause/serial/VerifyStatus 0.65
248 TestPause/serial/Unpause 0.89
249 TestNoKubernetes/serial/Start 37.86
250 TestPause/serial/PauseAgain 1.02
251 TestPause/serial/DeletePaused 15.5
252 TestPause/serial/VerifyDeletedResources 5.88
253 TestNetworkPlugins/group/auto/Start 93.81
254 TestNoKubernetes/serial/VerifyK8sNotRunning 0.65
255 TestNoKubernetes/serial/ProfileList 2.22
256 TestNoKubernetes/serial/Stop 2.03
257 TestNoKubernetes/serial/StartNoArgs 16.93
258 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.62
259 TestNetworkPlugins/group/false/Start 115.6
260 TestNetworkPlugins/group/auto/KubeletFlags 0.73
261 TestNetworkPlugins/group/auto/NetCatPod 15.92
262 TestNetworkPlugins/group/auto/DNS 0.15
263 TestNetworkPlugins/group/auto/Localhost 0.13
264 TestNetworkPlugins/group/auto/HairPin 5.13
265 TestNetworkPlugins/group/cilium/Start 128.25
266 TestNetworkPlugins/group/false/KubeletFlags 0.66
267 TestNetworkPlugins/group/false/NetCatPod 15.96
268 TestNetworkPlugins/group/false/DNS 0.16
269 TestNetworkPlugins/group/false/Localhost 0.2
270 TestNetworkPlugins/group/false/HairPin 5.14
272 TestNetworkPlugins/group/cilium/ControllerPod 5.02
273 TestNetworkPlugins/group/cilium/KubeletFlags 0.76
274 TestNetworkPlugins/group/cilium/NetCatPod 16.36
275 TestNetworkPlugins/group/cilium/DNS 0.18
276 TestNetworkPlugins/group/cilium/Localhost 0.16
277 TestNetworkPlugins/group/cilium/HairPin 0.16
278 TestNetworkPlugins/group/custom-weave/Start 69.73
279 TestNetworkPlugins/group/custom-weave/KubeletFlags 0.68
280 TestNetworkPlugins/group/custom-weave/NetCatPod 16.13
281 TestNetworkPlugins/group/enable-default-cni/Start 60.83
282 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.71
283 TestNetworkPlugins/group/enable-default-cni/NetCatPod 17.06
286 TestNetworkPlugins/group/bridge/Start 72.36
287 TestNetworkPlugins/group/bridge/KubeletFlags 0.67
288 TestNetworkPlugins/group/bridge/NetCatPod 18.95
TestDownloadOnly/v1.16.0/json-events (31.14s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:73: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220202161228-76172 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:73: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220202161228-76172 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (31.137108928s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (31.14s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:175: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220202161228-76172
aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220202161228-76172: exit status 85 (283.651186ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/02/02 16:12:28
	Running on machine: 37309
	Binary: Built with gc go1.17.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0202 16:12:28.826489   76189 out.go:297] Setting OutFile to fd 1 ...
	I0202 16:12:28.826621   76189 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0202 16:12:28.826627   76189 out.go:310] Setting ErrFile to fd 2...
	I0202 16:12:28.826630   76189 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0202 16:12:28.826696   76189 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/bin
	W0202 16:12:28.826778   76189 root.go:293] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/config/config.json: no such file or directory
	I0202 16:12:28.827218   76189 out.go:304] Setting JSON to true
	I0202 16:12:28.854186   76189 start.go:112] hostinfo: {"hostname":"37309.local","uptime":27723,"bootTime":1643819425,"procs":371,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0202 16:12:28.854295   76189 start.go:120] gopshost.Virtualization returned error: not implemented yet
	W0202 16:12:28.880410   76189 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball: no such file or directory
	I0202 16:12:28.880431   76189 notify.go:174] Checking for updates...
	I0202 16:12:28.906954   76189 driver.go:344] Setting default libvirt URI to qemu:///system
	W0202 16:12:28.993801   76189 docker.go:108] docker version returned error: exit status 1
	I0202 16:12:29.036078   76189 start.go:281] selected driver: docker
	I0202 16:12:29.036099   76189 start.go:798] validating driver "docker" against <nil>
	I0202 16:12:29.036212   76189 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0202 16:12:29.203021   76189 info.go:263] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix docker.raw.sock: connect: connection refused] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0202 16:12:29.255652   76189 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0202 16:12:29.424319   76189 info.go:263] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix docker.raw.sock: connect: connection refused] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0202 16:12:29.451241   76189 start_flags.go:288] no existing cluster config was found, will generate one from the flags 
	I0202 16:12:29.502789   76189 start_flags.go:369] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0202 16:12:29.502949   76189 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	I0202 16:12:29.502974   76189 start_flags.go:813] Wait components to verify : map[apiserver:true system_pods:true]
	I0202 16:12:29.503003   76189 cni.go:93] Creating CNI manager for ""
	I0202 16:12:29.503018   76189 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0202 16:12:29.503031   76189 start_flags.go:302] config:
	{Name:download-only-20220202161228-76172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220202161228-76172 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0202 16:12:29.544977   76189 cache.go:120] Beginning downloading kic base image for docker with docker
	I0202 16:12:29.570935   76189 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I0202 16:12:29.570935   76189 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0202 16:12:29.571153   76189 cache.go:107] acquiring lock: {Name:mke8ea1b66921f2c689172c69a19e4ace96fbed9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0202 16:12:29.571167   76189 cache.go:107] acquiring lock: {Name:mkc4ebc0d433dcc72fb406bb2db838dbd26ab262 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0202 16:12:29.572483   76189 cache.go:107] acquiring lock: {Name:mk85f352f5e70198e9fe28a00f57421ea466689f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0202 16:12:29.572870   76189 cache.go:107] acquiring lock: {Name:mkbefd087cdc098578fcd5791e90ddf2c13d0cff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0202 16:12:29.573106   76189 cache.go:107] acquiring lock: {Name:mk9543abbc8412158a4660906af7b17c96cf48d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0202 16:12:29.573485   76189 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/download-only-20220202161228-76172/config.json ...
	I0202 16:12:29.573479   76189 cache.go:107] acquiring lock: {Name:mkf96a9fc2fb71cd9c33094065eb0284ea47e1b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0202 16:12:29.573493   76189 cache.go:107] acquiring lock: {Name:mkdd058791262c32cfb57817f1fa92d31217e943 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0202 16:12:29.573511   76189 cache.go:107] acquiring lock: {Name:mkc18dd6e8a8da34cd27e467ad4e44fdd52182b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0202 16:12:29.573535   76189 cache.go:107] acquiring lock: {Name:mk186c58f3593b5d04bfc90b1ba5bab7a5b049a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0202 16:12:29.573538   76189 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/download-only-20220202161228-76172/config.json: {Name:mkd2156ab03045cb8a2bb7ef7dd315f118281431 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 16:12:29.573580   76189 cache.go:107] acquiring lock: {Name:mkf5cb7868032ebb76f371ff29615698cfada424 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0202 16:12:29.573748   76189 image.go:134] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.7
	I0202 16:12:29.573996   76189 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0202 16:12:29.574181   76189 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.16.0
	I0202 16:12:29.574425   76189 image.go:134] retrieving image: docker.io/kubernetesui/dashboard:v2.3.1
	I0202 16:12:29.574452   76189 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.16.0
	I0202 16:12:29.574464   76189 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.2
	I0202 16:12:29.574546   76189 image.go:134] retrieving image: k8s.gcr.io/etcd:3.3.15-0
	I0202 16:12:29.574581   76189 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.16.0
	I0202 16:12:29.574753   76189 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.16.0
	I0202 16:12:29.574831   76189 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0202 16:12:29.574858   76189 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0202 16:12:29.575694   76189 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/linux/v1.16.0/kubeadm
	I0202 16:12:29.575718   76189 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/linux/v1.16.0/kubectl
	I0202 16:12:29.575741   76189 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/linux/v1.16.0/kubelet
	I0202 16:12:29.576486   76189 image.go:180] daemon lookup for k8s.gcr.io/pause:3.1: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0202 16:12:29.579314   76189 image.go:180] daemon lookup for docker.io/kubernetesui/metrics-scraper:v1.0.7: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0202 16:12:29.579330   76189 image.go:180] daemon lookup for k8s.gcr.io/coredns:1.6.2: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0202 16:12:29.579340   76189 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0202 16:12:29.579279   76189 image.go:180] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0202 16:12:29.579317   76189 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0202 16:12:29.579362   76189 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0202 16:12:29.579829   76189 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0202 16:12:29.579871   76189 image.go:180] daemon lookup for k8s.gcr.io/etcd:3.3.15-0: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0202 16:12:29.579953   76189 image.go:180] daemon lookup for docker.io/kubernetesui/dashboard:v2.3.1: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0202 16:12:29.687528   76189 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b to local cache
	I0202 16:12:29.687697   76189 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local cache directory
	I0202 16:12:29.687795   76189 image.go:119] Writing gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b to local cache
	I0202 16:12:30.826884   76189 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7
	I0202 16:12:30.827756   76189 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1
	I0202 16:12:31.150240   76189 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/pause_3.1
	I0202 16:12:31.165950   76189 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0
	I0202 16:12:31.262626   76189 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2
	I0202 16:12:31.265842   76189 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0
	I0202 16:12:31.319095   76189 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0
	I0202 16:12:31.325466   76189 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0
	I0202 16:12:31.366137   76189 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/pause_3.1 exists
	I0202 16:12:31.366156   76189 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/pause_3.1" took 1.793694447s
	I0202 16:12:31.366168   76189 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/pause_3.1 succeeded
	I0202 16:12:31.381566   76189 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0
	I0202 16:12:31.520413   76189 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5
	I0202 16:12:31.557523   76189 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 exists
	I0202 16:12:31.557542   76189 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7" took 1.98637618s
	I0202 16:12:31.557555   76189 cache.go:80] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.7 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 succeeded
	I0202 16:12:32.719311   76189 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 exists
	I0202 16:12:32.719336   76189 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1" took 3.146767321s
	I0202 16:12:32.719349   76189 cache.go:80] save to tar file docker.io/kubernetesui/dashboard:v2.3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 succeeded
	I0202 16:12:33.240849   76189 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/darwin/v1.16.0/kubectl
	I0202 16:12:33.633234   76189 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0202 16:12:33.633255   76189 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 4.060881801s
	I0202 16:12:33.633264   76189 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0202 16:12:33.757633   76189 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 exists
	I0202 16:12:33.757650   76189 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.2" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2" took 4.185084042s
	I0202 16:12:33.757658   76189 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 succeeded
	I0202 16:12:35.258614   76189 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0 exists
	I0202 16:12:35.258647   76189 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0" took 5.686972177s
	I0202 16:12:35.258682   76189 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0 succeeded
	I0202 16:12:35.338598   76189 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0 exists
	I0202 16:12:35.338617   76189 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0" took 5.766101665s
	I0202 16:12:35.338627   76189 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0 succeeded
	I0202 16:12:35.760176   76189 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0 exists
	I0202 16:12:35.760194   76189 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0" took 6.18778798s
	I0202 16:12:35.760203   76189 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0 succeeded
	I0202 16:12:35.764817   76189 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0 exists
	I0202 16:12:35.764832   76189 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0" took 6.193602307s
	I0202 16:12:35.764840   76189 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0 succeeded
	I0202 16:12:36.116622   76189 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 exists
	I0202 16:12:36.116640   76189 cache.go:96] cache image "k8s.gcr.io/etcd:3.3.15-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0" took 6.54438154s
	I0202 16:12:36.116648   76189 cache.go:80] save to tar file k8s.gcr.io/etcd:3.3.15-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 succeeded
	I0202 16:12:36.116658   76189 cache.go:87] Successfully saved all images to host disk.
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220202161228-76172"

-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.29s)

TestDownloadOnly/v1.23.2/json-events (9.77s)

=== RUN   TestDownloadOnly/v1.23.2/json-events
aaa_download_only_test.go:73: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220202161228-76172 --force --alsologtostderr --kubernetes-version=v1.23.2 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:73: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220202161228-76172 --force --alsologtostderr --kubernetes-version=v1.23.2 --container-runtime=docker --driver=docker : (9.773553368s)
--- PASS: TestDownloadOnly/v1.23.2/json-events (9.77s)

TestDownloadOnly/v1.23.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.23.2/preload-exists
--- PASS: TestDownloadOnly/v1.23.2/preload-exists (0.00s)

TestDownloadOnly/v1.23.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.23.2/kubectl
--- PASS: TestDownloadOnly/v1.23.2/kubectl (0.00s)

TestDownloadOnly/v1.23.2/LogsDuration (0.28s)

=== RUN   TestDownloadOnly/v1.23.2/LogsDuration
aaa_download_only_test.go:175: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220202161228-76172
aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220202161228-76172: exit status 85 (277.233954ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/02/02 16:13:00
	Running on machine: 37309
	Binary: Built with gc go1.17.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0202 16:13:00.611612   76262 out.go:297] Setting OutFile to fd 1 ...
	I0202 16:13:00.611744   76262 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0202 16:13:00.611750   76262 out.go:310] Setting ErrFile to fd 2...
	I0202 16:13:00.611753   76262 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0202 16:13:00.611834   76262 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/bin
	W0202 16:13:00.611915   76262 root.go:293] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/config/config.json: no such file or directory
	I0202 16:13:00.612078   76262 out.go:304] Setting JSON to true
	I0202 16:13:00.637754   76262 start.go:112] hostinfo: {"hostname":"37309.local","uptime":27755,"bootTime":1643819425,"procs":369,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0202 16:13:00.637847   76262 start.go:120] gopshost.Virtualization returned error: not implemented yet
	W0202 16:13:00.664867   76262 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball: no such file or directory
	I0202 16:13:00.664918   76262 notify.go:174] Checking for updates...
	I0202 16:13:00.691992   76262 config.go:176] Loaded profile config "download-only-20220202161228-76172": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0202 16:13:00.692048   76262 start.go:706] api.Load failed for download-only-20220202161228-76172: filestore "download-only-20220202161228-76172": Docker machine "download-only-20220202161228-76172" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0202 16:13:00.692098   76262 driver.go:344] Setting default libvirt URI to qemu:///system
	W0202 16:13:00.692123   76262 start.go:706] api.Load failed for download-only-20220202161228-76172: filestore "download-only-20220202161228-76172": Docker machine "download-only-20220202161228-76172" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0202 16:13:00.785554   76262 docker.go:132] docker version: linux-20.10.6
	I0202 16:13:00.785682   76262 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0202 16:13:00.962878   76262 info.go:263] docker info: {ID:LVNT:MQD4:UDW3:UJT2:HLHX:4UTC:4NTE:52G5:6DGB:YSKS:CFIX:B23W Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:47 SystemTime:2022-02-03 00:13:00.903672675 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerA
ddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=secc
omp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0202 16:13:00.989551   76262 start.go:281] selected driver: docker
	I0202 16:13:00.989561   76262 start.go:798] validating driver "docker" against &{Name:download-only-20220202161228-76172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220202161228-76172 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0202 16:13:00.989881   76262 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0202 16:13:01.165712   76262 info.go:263] docker info: {ID:LVNT:MQD4:UDW3:UJT2:HLHX:4UTC:4NTE:52G5:6DGB:YSKS:CFIX:B23W Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:47 SystemTime:2022-02-03 00:13:01.106457459 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerA
ddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=secc
omp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0202 16:13:01.167722   76262 cni.go:93] Creating CNI manager for ""
	I0202 16:13:01.167742   76262 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0202 16:13:01.167754   76262 start_flags.go:302] config:
	{Name:download-only-20220202161228-76172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:download-only-20220202161228-76172 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0202 16:13:01.194591   76262 cache.go:120] Beginning downloading kic base image for docker with docker
	I0202 16:13:01.220160   76262 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0202 16:13:01.220167   76262 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I0202 16:13:01.289889   76262 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.2/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4
	I0202 16:13:01.289912   76262 cache.go:57] Caching tarball of preloaded images
	I0202 16:13:01.290096   76262 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0202 16:13:01.316387   76262 preload.go:238] getting checksum for preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4 ...
	I0202 16:13:01.355732   76262 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I0202 16:13:01.355753   76262 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I0202 16:13:01.406697   76262 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.2/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4?checksum=md5:6fa926c88a747ae43bb3bda5a3741fe2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220202161228-76172"

-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.2/LogsDuration (0.28s)

TestDownloadOnly/v1.23.3-rc.0/json-events (11.79s)

=== RUN   TestDownloadOnly/v1.23.3-rc.0/json-events
aaa_download_only_test.go:73: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220202161228-76172 --force --alsologtostderr --kubernetes-version=v1.23.3-rc.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:73: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220202161228-76172 --force --alsologtostderr --kubernetes-version=v1.23.3-rc.0 --container-runtime=docker --driver=docker : (11.789268045s)
--- PASS: TestDownloadOnly/v1.23.3-rc.0/json-events (11.79s)

TestDownloadOnly/v1.23.3-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.23.3-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.23.3-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.23.3-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.23.3-rc.0/kubectl
--- PASS: TestDownloadOnly/v1.23.3-rc.0/kubectl (0.00s)

TestDownloadOnly/v1.23.3-rc.0/LogsDuration (0.28s)

=== RUN   TestDownloadOnly/v1.23.3-rc.0/LogsDuration
aaa_download_only_test.go:175: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220202161228-76172
aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220202161228-76172: exit status 85 (275.031017ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/02/02 16:13:10
	Running on machine: 37309
	Binary: Built with gc go1.17.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0202 16:13:10.663894   76291 out.go:297] Setting OutFile to fd 1 ...
	I0202 16:13:10.664018   76291 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0202 16:13:10.664024   76291 out.go:310] Setting ErrFile to fd 2...
	I0202 16:13:10.664027   76291 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0202 16:13:10.664098   76291 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/bin
	W0202 16:13:10.664180   76291 root.go:293] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/config/config.json: no such file or directory
	I0202 16:13:10.664331   76291 out.go:304] Setting JSON to true
	I0202 16:13:10.689756   76291 start.go:112] hostinfo: {"hostname":"37309.local","uptime":27765,"bootTime":1643819425,"procs":368,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0202 16:13:10.689854   76291 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0202 16:13:10.716948   76291 notify.go:174] Checking for updates...
	I0202 16:13:10.743797   76291 config.go:176] Loaded profile config "download-only-20220202161228-76172": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	W0202 16:13:10.743857   76291 start.go:706] api.Load failed for download-only-20220202161228-76172: filestore "download-only-20220202161228-76172": Docker machine "download-only-20220202161228-76172" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0202 16:13:10.743940   76291 driver.go:344] Setting default libvirt URI to qemu:///system
	W0202 16:13:10.743965   76291 start.go:706] api.Load failed for download-only-20220202161228-76172: filestore "download-only-20220202161228-76172": Docker machine "download-only-20220202161228-76172" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0202 16:13:10.838246   76291 docker.go:132] docker version: linux-20.10.6
	I0202 16:13:10.838354   76291 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0202 16:13:11.016422   76291 info.go:263] docker info: {ID:LVNT:MQD4:UDW3:UJT2:HLHX:4UTC:4NTE:52G5:6DGB:YSKS:CFIX:B23W Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:47 SystemTime:2022-02-03 00:13:10.963388281 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerA
ddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=secc
omp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0202 16:13:11.043487   76291 start.go:281] selected driver: docker
	I0202 16:13:11.043497   76291 start.go:798] validating driver "docker" against &{Name:download-only-20220202161228-76172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:download-only-20220202161228-76172 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0202 16:13:11.043810   76291 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0202 16:13:11.218497   76291 info.go:263] docker info: {ID:LVNT:MQD4:UDW3:UJT2:HLHX:4UTC:4NTE:52G5:6DGB:YSKS:CFIX:B23W Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:47 SystemTime:2022-02-03 00:13:11.167991664 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerA
ddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=secc
omp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0202 16:13:11.220627   76291 cni.go:93] Creating CNI manager for ""
	I0202 16:13:11.220646   76291 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0202 16:13:11.220662   76291 start_flags.go:302] config:
	{Name:download-only-20220202161228-76172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3-rc.0 ClusterName:download-only-20220202161228-76172 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:d
ocker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0202 16:13:11.247807   76291 cache.go:120] Beginning downloading kic base image for docker with docker
	I0202 16:13:11.274353   76291 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I0202 16:13:11.274353   76291 preload.go:132] Checking if preload exists for k8s version v1.23.3-rc.0 and runtime docker
	I0202 16:13:11.349313   76291 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.3-rc.0/preloaded-images-k8s-v17-v1.23.3-rc.0-docker-overlay2-amd64.tar.lz4
	I0202 16:13:11.349336   76291 cache.go:57] Caching tarball of preloaded images
	I0202 16:13:11.349981   76291 preload.go:132] Checking if preload exists for k8s version v1.23.3-rc.0 and runtime docker
	I0202 16:13:11.376280   76291 preload.go:238] getting checksum for preloaded-images-k8s-v17-v1.23.3-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0202 16:13:11.389086   76291 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I0202 16:13:11.389110   76291 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I0202 16:13:11.474941   76291 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.3-rc.0/preloaded-images-k8s-v17-v1.23.3-rc.0-docker-overlay2-amd64.tar.lz4?checksum=md5:98a0ed725de43435c7e0fb42aa7ffb00 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-rc.0-docker-overlay2-amd64.tar.lz4
	I0202 16:13:20.534428   76291 preload.go:249] saving checksum for preloaded-images-k8s-v17-v1.23.3-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0202 16:13:20.534582   76291 preload.go:256] verifying checksumm of /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-rc.0-docker-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220202161228-76172"

-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.3-rc.0/LogsDuration (0.28s)

TestDownloadOnly/DeleteAll (1.13s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:193: (dbg) Run:  out/minikube-darwin-amd64 delete --all
aaa_download_only_test.go:193: (dbg) Done: out/minikube-darwin-amd64 delete --all: (1.130174725s)
--- PASS: TestDownloadOnly/DeleteAll (1.13s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.64s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-20220202161228-76172
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.64s)

TestDownloadOnlyKic (9.04s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-20220202161325-76172 --force --alsologtostderr --driver=docker 
aaa_download_only_test.go:230: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p download-docker-20220202161325-76172 --force --alsologtostderr --driver=docker : (7.45812999s)
helpers_test.go:176: Cleaning up "download-docker-20220202161325-76172" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-20220202161325-76172
--- PASS: TestDownloadOnlyKic (9.04s)

TestBinaryMirror (1.95s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:316: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-20220202161334-76172 --alsologtostderr --binary-mirror http://127.0.0.1:55327 --driver=docker 
aaa_download_only_test.go:316: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-20220202161334-76172 --alsologtostderr --binary-mirror http://127.0.0.1:55327 --driver=docker : (1.042225344s)
helpers_test.go:176: Cleaning up "binary-mirror-20220202161334-76172" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-20220202161334-76172
--- PASS: TestBinaryMirror (1.95s)

TestOffline (134.38s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline

=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-20220202171133-76172 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 

=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-20220202171133-76172 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (1m52.703977181s)
helpers_test.go:176: Cleaning up "offline-docker-20220202171133-76172" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-20220202171133-76172
E0202 17:13:30.184498   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202162514-76172/client.crt: no such file or directory
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-20220202171133-76172: (21.678327519s)
--- PASS: TestOffline (134.38s)

TestAddons/Setup (185.73s)

=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-20220202161336-76172 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-darwin-amd64 start -p addons-20220202161336-76172 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m5.7295927s)
--- PASS: TestAddons/Setup (185.73s)

TestAddons/parallel/HelmTiller (9.68s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:407: tiller-deploy stabilized in 6.703558ms
addons_test.go:409: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:343: "tiller-deploy-6d67d5465d-xk6zs" [c7ca8081-d3ba-4140-9328-5c0b504abbb4] Running
addons_test.go:409: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.009618665s
addons_test.go:424: (dbg) Run:  kubectl --context addons-20220202161336-76172 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:424: (dbg) Done: kubectl --context addons-20220202161336-76172 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.096430683s)
addons_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220202161336-76172 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.68s)

TestAddons/parallel/CSI (49.07s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:512: csi-hostpath-driver pods stabilized in 15.239476ms
addons_test.go:515: (dbg) Run:  kubectl --context addons-20220202161336-76172 create -f testdata/csi-hostpath-driver/pvc.yaml

=== CONT  TestAddons/parallel/CSI
addons_test.go:515: (dbg) Done: kubectl --context addons-20220202161336-76172 create -f testdata/csi-hostpath-driver/pvc.yaml: (2.857244638s)
addons_test.go:520: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20220202161336-76172 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:525: (dbg) Run:  kubectl --context addons-20220202161336-76172 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:530: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:343: "task-pv-pod" [38ab917a-4a2c-46ad-9a39-33e3d17ec0b1] Pending
helpers_test.go:343: "task-pv-pod" [38ab917a-4a2c-46ad-9a39-33e3d17ec0b1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod" [38ab917a-4a2c-46ad-9a39-33e3d17ec0b1] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:530: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 23.012364548s
addons_test.go:535: (dbg) Run:  kubectl --context addons-20220202161336-76172 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:540: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20220202161336-76172 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20220202161336-76172 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:545: (dbg) Run:  kubectl --context addons-20220202161336-76172 delete pod task-pv-pod
addons_test.go:551: (dbg) Run:  kubectl --context addons-20220202161336-76172 delete pvc hpvc
addons_test.go:557: (dbg) Run:  kubectl --context addons-20220202161336-76172 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:562: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20220202161336-76172 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:567: (dbg) Run:  kubectl --context addons-20220202161336-76172 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:572: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:343: "task-pv-pod-restore" [b8ed9dd1-e35c-4f43-91f8-4e9051334f8c] Pending
helpers_test.go:343: "task-pv-pod-restore" [b8ed9dd1-e35c-4f43-91f8-4e9051334f8c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod-restore" [b8ed9dd1-e35c-4f43-91f8-4e9051334f8c] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:572: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 12.006706006s
addons_test.go:577: (dbg) Run:  kubectl --context addons-20220202161336-76172 delete pod task-pv-pod-restore
addons_test.go:581: (dbg) Run:  kubectl --context addons-20220202161336-76172 delete pvc hpvc-restore
addons_test.go:585: (dbg) Run:  kubectl --context addons-20220202161336-76172 delete volumesnapshot new-snapshot-demo
addons_test.go:589: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220202161336-76172 addons disable csi-hostpath-driver --alsologtostderr -v=1

=== CONT  TestAddons/parallel/CSI
addons_test.go:589: (dbg) Done: out/minikube-darwin-amd64 -p addons-20220202161336-76172 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.873351122s)
addons_test.go:593: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220202161336-76172 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (49.07s)

TestAddons/serial/GCPAuth (18.78s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:604: (dbg) Run:  kubectl --context addons-20220202161336-76172 create -f testdata/busybox.yaml
addons_test.go:610: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [ad860050-fffa-4583-87b4-f00b855f2708] Pending
helpers_test.go:343: "busybox" [ad860050-fffa-4583-87b4-f00b855f2708] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [ad860050-fffa-4583-87b4-f00b855f2708] Running
addons_test.go:610: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 11.013356078s
addons_test.go:616: (dbg) Run:  kubectl --context addons-20220202161336-76172 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:629: (dbg) Run:  kubectl --context addons-20220202161336-76172 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:653: (dbg) Run:  kubectl --context addons-20220202161336-76172 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:666: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220202161336-76172 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:666: (dbg) Done: out/minikube-darwin-amd64 -p addons-20220202161336-76172 addons disable gcp-auth --alsologtostderr -v=1: (6.715172781s)
--- PASS: TestAddons/serial/GCPAuth (18.78s)

TestAddons/StoppedEnableDisable (19.29s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-20220202161336-76172
addons_test.go:133: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-20220202161336-76172: (18.836607155s)
addons_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-20220202161336-76172
addons_test.go:141: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-20220202161336-76172
--- PASS: TestAddons/StoppedEnableDisable (19.29s)

TestCertOptions (85.34s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-20220202171905-76172 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost

=== CONT  TestCertOptions
cert_options_test.go:50: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-20220202171905-76172 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (1m18.509334553s)
cert_options_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-20220202171905-76172 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
E0202 17:20:23.763002   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202170808-76172/client.crt: no such file or directory
cert_options_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-20220202171905-76172 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-20220202171905-76172" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-20220202171905-76172
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-20220202171905-76172: (5.331910363s)
--- PASS: TestCertOptions (85.34s)

TestCertExpiration (276.49s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20220202171525-76172 --memory=2048 --cert-expiration=3m --driver=docker 
E0202 17:15:37.026517   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202170808-76172/client.crt: no such file or directory
E0202 17:16:17.992180   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202170808-76172/client.crt: no such file or directory
E0202 17:16:25.083536   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: no such file or directory
E0202 17:16:41.955332   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: no such file or directory
cert_options_test.go:124: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-20220202171525-76172 --memory=2048 --cert-expiration=3m --driver=docker : (1m23.806559408s)
E0202 17:17:39.918466   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202170808-76172/client.crt: no such file or directory
E0202 17:17:42.673764   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202163007-76172/client.crt: no such file or directory
E0202 17:18:30.188122   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202162514-76172/client.crt: no such file or directory
=== CONT  TestCertExpiration
cert_options_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20220202171525-76172 --memory=2048 --cert-expiration=8760h --driver=docker 
E0202 17:19:56.003650   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202170808-76172/client.crt: no such file or directory
cert_options_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-20220202171525-76172 --memory=2048 --cert-expiration=8760h --driver=docker : (7.102341175s)
helpers_test.go:176: Cleaning up "cert-expiration-20220202171525-76172" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-20220202171525-76172
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-20220202171525-76172: (5.576455877s)
--- PASS: TestCertExpiration (276.49s)
TestDockerFlags (97.28s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:46: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-20220202171348-76172 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
E0202 17:14:55.993377   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202170808-76172/client.crt: no such file or directory
E0202 17:14:56.000100   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202170808-76172/client.crt: no such file or directory
E0202 17:14:56.010380   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202170808-76172/client.crt: no such file or directory
E0202 17:14:56.035761   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202170808-76172/client.crt: no such file or directory
E0202 17:14:56.085026   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202170808-76172/client.crt: no such file or directory
E0202 17:14:56.171095   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202170808-76172/client.crt: no such file or directory
E0202 17:14:56.336869   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202170808-76172/client.crt: no such file or directory
E0202 17:14:56.665807   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202170808-76172/client.crt: no such file or directory
E0202 17:14:57.315881   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202170808-76172/client.crt: no such file or directory
E0202 17:14:58.596337   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202170808-76172/client.crt: no such file or directory
E0202 17:15:01.165950   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202170808-76172/client.crt: no such file or directory
E0202 17:15:06.294434   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202170808-76172/client.crt: no such file or directory
docker_test.go:46: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-20220202171348-76172 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (1m20.287098105s)
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20220202171348-76172 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:62: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20220202171348-76172 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:176: Cleaning up "docker-flags-20220202171348-76172" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-20220202171348-76172
E0202 17:15:16.538091   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202170808-76172/client.crt: no such file or directory
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-20220202171348-76172: (15.62042219s)
--- PASS: TestDockerFlags (97.28s)
TestForceSystemdFlag (339.94s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-20220202171325-76172 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-20220202171325-76172 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (5m22.039685071s)
docker_test.go:105: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-20220202171325-76172 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-flag-20220202171325-76172" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-20220202171325-76172
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-20220202171325-76172: (17.166326946s)
--- PASS: TestForceSystemdFlag (339.94s)
TestForceSystemdEnv (82.77s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:151: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-20220202171202-76172 --memory=2048 --alsologtostderr -v=5 --driver=docker 
E0202 17:12:42.664421   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202163007-76172/client.crt: no such file or directory
docker_test.go:151: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-20220202171202-76172 --memory=2048 --alsologtostderr -v=5 --driver=docker : (1m6.079338334s)
docker_test.go:105: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-20220202171202-76172 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-env-20220202171202-76172" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-20220202171202-76172
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-20220202171202-76172: (15.939593108s)
--- PASS: TestForceSystemdEnv (82.77s)
TestHyperKitDriverInstallOrUpdate (7.8s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (7.80s)
TestErrorSpam/setup (79.04s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-20220202162320-76172 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220202162320-76172 --driver=docker 
error_spam_test.go:79: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-20220202162320-76172 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220202162320-76172 --driver=docker : (1m19.042044315s)
error_spam_test.go:89: acceptable stderr: "! /usr/local/bin/kubectl is version 1.19.7, which may have incompatibilites with Kubernetes 1.23.2."
--- PASS: TestErrorSpam/setup (79.04s)
TestErrorSpam/start (2.42s)
=== RUN   TestErrorSpam/start
error_spam_test.go:214: Cleaning up 1 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220202162320-76172 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220202162320-76172 start --dry-run
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220202162320-76172 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220202162320-76172 start --dry-run
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220202162320-76172 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220202162320-76172 start --dry-run
--- PASS: TestErrorSpam/start (2.42s)
TestErrorSpam/status (1.96s)
=== RUN   TestErrorSpam/status
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220202162320-76172 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220202162320-76172 status
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220202162320-76172 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220202162320-76172 status
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220202162320-76172 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220202162320-76172 status
--- PASS: TestErrorSpam/status (1.96s)
TestErrorSpam/pause (2.2s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220202162320-76172 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220202162320-76172 pause
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220202162320-76172 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220202162320-76172 pause
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220202162320-76172 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220202162320-76172 pause
--- PASS: TestErrorSpam/pause (2.20s)
TestErrorSpam/unpause (2.32s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220202162320-76172 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220202162320-76172 unpause
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220202162320-76172 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220202162320-76172 unpause
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220202162320-76172 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220202162320-76172 unpause
--- PASS: TestErrorSpam/unpause (2.32s)
TestErrorSpam/stop (18.98s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220202162320-76172 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220202162320-76172 stop
error_spam_test.go:157: (dbg) Done: out/minikube-darwin-amd64 -p nospam-20220202162320-76172 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220202162320-76172 stop: (18.225002749s)
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220202162320-76172 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220202162320-76172 stop
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220202162320-76172 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220202162320-76172 stop
--- PASS: TestErrorSpam/stop (18.98s)
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1715: local sync path: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/test/nested/copy/76172/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
TestFunctional/serial/StartWithProxy (131.54s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2097: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220202162514-76172 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
E0202 16:26:41.854774   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: no such file or directory
E0202 16:26:41.863136   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: no such file or directory
E0202 16:26:41.875068   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: no such file or directory
E0202 16:26:41.905217   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: no such file or directory
E0202 16:26:41.955081   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: no such file or directory
E0202 16:26:42.035535   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: no such file or directory
E0202 16:26:42.204732   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: no such file or directory
E0202 16:26:42.526019   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: no such file or directory
E0202 16:26:43.166516   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: no such file or directory
E0202 16:26:44.455975   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: no such file or directory
E0202 16:26:47.020133   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: no such file or directory
E0202 16:26:52.145362   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: no such file or directory
E0202 16:27:02.386093   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: no such file or directory
E0202 16:27:22.870952   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: no such file or directory
functional_test.go:2097: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220202162514-76172 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (2m11.537205662s)
--- PASS: TestFunctional/serial/StartWithProxy (131.54s)
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)
TestFunctional/serial/SoftStart (8.03s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220202162514-76172 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220202162514-76172 --alsologtostderr -v=8: (8.032627705s)
functional_test.go:659: soft start took 8.033102796s for "functional-20220202162514-76172" cluster.
--- PASS: TestFunctional/serial/SoftStart (8.03s)
TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)
TestFunctional/serial/KubectlGetPods (1.78s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-20220202162514-76172 get po -A
functional_test.go:692: (dbg) Done: kubectl --context functional-20220202162514-76172 get po -A: (1.780304815s)
--- PASS: TestFunctional/serial/KubectlGetPods (1.78s)
TestFunctional/serial/CacheCmd/cache/add_remote (10.13s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1050: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 cache add k8s.gcr.io/pause:3.1
functional_test.go:1050: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220202162514-76172 cache add k8s.gcr.io/pause:3.1: (2.274546092s)
functional_test.go:1050: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 cache add k8s.gcr.io/pause:3.3
functional_test.go:1050: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220202162514-76172 cache add k8s.gcr.io/pause:3.3: (4.019933732s)
functional_test.go:1050: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 cache add k8s.gcr.io/pause:latest
functional_test.go:1050: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220202162514-76172 cache add k8s.gcr.io/pause:latest: (3.837347394s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (10.13s)
TestFunctional/serial/CacheCmd/cache/add_local (2.13s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1081: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220202162514-76172 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/functional-20220202162514-761722548016161
functional_test.go:1093: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 cache add minikube-local-cache-test:functional-20220202162514-76172
functional_test.go:1093: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220202162514-76172 cache add minikube-local-cache-test:functional-20220202162514-76172: (1.508706581s)
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 cache delete minikube-local-cache-test:functional-20220202162514-76172
functional_test.go:1087: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220202162514-76172
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.13s)
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)
TestFunctional/serial/CacheCmd/cache/list (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1114: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.68s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1128: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.68s)
TestFunctional/serial/CacheCmd/cache/cache_reload (4.1s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1151: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1157: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1157: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220202162514-76172 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (632.341848ms)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1162: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 cache reload
functional_test.go:1162: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220202162514-76172 cache reload: (2.139148541s)
functional_test.go:1167: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (4.10s)
TestFunctional/serial/CacheCmd/cache/delete (0.14s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1176: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1176: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)
TestFunctional/serial/MinikubeKubectlCmd (0.48s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 kubectl -- --context functional-20220202162514-76172 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.48s)
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.58s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-20220202162514-76172 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.58s)
TestFunctional/serial/ExtraConfig (27.36s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220202162514-76172 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0202 16:28:03.835296   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220202162514-76172 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (27.363516161s)
functional_test.go:757: restart took 27.363598724s for "functional-20220202162514-76172" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (27.36s)

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:811: (dbg) Run:  kubectl --context functional-20220202162514-76172 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:826: etcd phase: Running
functional_test.go:836: etcd status: Ready
functional_test.go:826: kube-apiserver phase: Running
functional_test.go:836: kube-apiserver status: Ready
functional_test.go:826: kube-controller-manager phase: Running
functional_test.go:836: kube-controller-manager status: Ready
functional_test.go:826: kube-scheduler phase: Running
functional_test.go:836: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (3.09s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1240: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 logs
functional_test.go:1240: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220202162514-76172 logs: (3.094190646s)
--- PASS: TestFunctional/serial/LogsCmd (3.09s)

TestFunctional/serial/LogsFileCmd (3.17s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/functional-20220202162514-761723069008155/logs.txt
functional_test.go:1257: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220202162514-76172 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/functional-20220202162514-761723069008155/logs.txt: (3.168161824s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.17s)

TestFunctional/parallel/ConfigCmd (0.41s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1203: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1203: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 config get cpus
functional_test.go:1203: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220202162514-76172 config get cpus: exit status 14 (45.128298ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1203: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 config set cpus 2
functional_test.go:1203: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 config get cpus
functional_test.go:1203: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 config unset cpus
functional_test.go:1203: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 config get cpus
functional_test.go:1203: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220202162514-76172 config get cpus: exit status 14 (44.818092ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)
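The ConfigCmd subtest above round-trips `config set` / `config get` / `config unset`, and treats exit status 14 from `config get` on an unset key as the expected outcome. The sequence can be sketched as follows; `cfg` is a hypothetical stand-in for `minikube -p <profile> config` backed by a temp file (minikube itself is not assumed to be installed), mimicking the exit-14 convention seen in the log:

```shell
#!/bin/sh
# Sketch of the get/set/unset round-trip the ConfigCmd subtest performs.
# `cfg` is a hypothetical stand-in for `minikube -p <profile> config`;
# per the log, a `get` on a missing key exits with status 14.
CFG_FILE=$(mktemp)

cfg() {
  case "$1" in
    set)
      printf '%s=%s\n' "$2" "$3" >> "$CFG_FILE"
      ;;
    unset)
      grep -v "^$2=" "$CFG_FILE" > "$CFG_FILE.tmp" || true
      mv "$CFG_FILE.tmp" "$CFG_FILE"
      ;;
    get)
      # Print the stored value; signal a missing key with status 14.
      grep "^$2=" "$CFG_FILE" | cut -d= -f2 | grep . || return 14
      ;;
  esac
}

cfg set cpus 2
cfg get cpus                              # prints "2"
cfg unset cpus
cfg get cpus || echo "exit status $?"     # prints "exit status 14"
```

This mirrors the test's assertion order: unset then get (expect 14), set then get (expect the value), unset then get again (expect 14).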

TestFunctional/parallel/DashboardCmd (3.91s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:906: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220202162514-76172 --alsologtostderr -v=1]
2022/02/02 16:29:13 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:911: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220202162514-76172 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to kill pid 78778: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (3.91s)

TestFunctional/parallel/DryRun (1.58s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:975: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220202162514-76172 --dry-run --memory 250MB --alsologtostderr --driver=docker 
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:975: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20220202162514-76172 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (736.013098ms)
-- stdout --
	* [functional-20220202162514-76172] minikube v1.25.1 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=13251
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0202 16:29:08.592283   78714 out.go:297] Setting OutFile to fd 1 ...
	I0202 16:29:08.592421   78714 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0202 16:29:08.592429   78714 out.go:310] Setting ErrFile to fd 2...
	I0202 16:29:08.592432   78714 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0202 16:29:08.592531   78714 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/bin
	I0202 16:29:08.592816   78714 out.go:304] Setting JSON to false
	I0202 16:29:08.620023   78714 start.go:112] hostinfo: {"hostname":"37309.local","uptime":28723,"bootTime":1643819425,"procs":366,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0202 16:29:08.620135   78714 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0202 16:29:08.647123   78714 out.go:176] * [functional-20220202162514-76172] minikube v1.25.1 on Darwin 11.2.3
	I0202 16:29:08.672655   78714 out.go:176]   - MINIKUBE_LOCATION=13251
	I0202 16:29:08.698797   78714 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
	I0202 16:29:08.724789   78714 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0202 16:29:08.750567   78714 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0202 16:29:08.777810   78714 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube
	I0202 16:29:08.778354   78714 config.go:176] Loaded profile config "functional-20220202162514-76172": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0202 16:29:08.799492   78714 driver.go:344] Setting default libvirt URI to qemu:///system
	I0202 16:29:08.985994   78714 docker.go:132] docker version: linux-20.10.6
	I0202 16:29:08.986181   78714 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0202 16:29:09.181051   78714 info.go:263] docker info: {ID:LVNT:MQD4:UDW3:UJT2:HLHX:4UTC:4NTE:52G5:6DGB:YSKS:CFIX:B23W Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:53 SystemTime:2022-02-03 00:29:09.125263882 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerA
ddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=secc
omp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0202 16:29:09.227308   78714 out.go:176] * Using the docker driver based on existing profile
	I0202 16:29:09.227362   78714 start.go:281] selected driver: docker
	I0202 16:29:09.227377   78714 start.go:798] validating driver "docker" against &{Name:functional-20220202162514-76172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:functional-20220202162514-76172 Namespace:default APIServerName:minikubeCA APIServ
erNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regis
try:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0202 16:29:09.227482   78714 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0202 16:29:09.255341   78714 out.go:176] 
	W0202 16:29:09.255534   78714 out.go:241] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0202 16:29:09.281271   78714 out.go:176] 
** /stderr **
functional_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220202162514-76172 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.58s)
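The DryRun subtest validates start flags without creating a cluster: the log shows `--memory 250MB` rejected with `RSRC_INSUFFICIENT_REQ_MEMORY` (requested 250MiB below the usable minimum of 1800MB) and exit status 23. The check can be sketched as below; the values, message text, and exit code are taken from the log, while `validate_memory` itself is a hypothetical stand-in, not minikube's implementation:

```shell
#!/bin/sh
# Sketch of the memory validation seen in the dry-run log. The 1800MB
# minimum and exit status 23 come from the report; the function is a
# hypothetical stand-in for minikube's internal check.
validate_memory() {
  requested=$1
  minimum=1800
  if [ "$requested" -lt "$minimum" ]; then
    echo "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation ${requested}MiB is less than the usable minimum of ${minimum}MB" >&2
    return 23
  fi
}

validate_memory 250 || echo "exit status $?"   # prints "exit status 23"
validate_memory 4000 && echo "ok"              # prints "ok"
```

Because the harness expects this failure, the test passes when the non-zero exit and message appear, as in the log above.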

TestFunctional/parallel/InternationalLanguage (0.72s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1021: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220202162514-76172 --dry-run --memory 250MB --alsologtostderr --driver=docker 
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1021: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20220202162514-76172 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (719.482712ms)
-- stdout --
	* [functional-20220202162514-76172] minikube v1.25.1 sur Darwin 11.2.3
	  - MINIKUBE_LOCATION=13251
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0202 16:29:07.883881   78683 out.go:297] Setting OutFile to fd 1 ...
	I0202 16:29:07.884056   78683 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0202 16:29:07.884062   78683 out.go:310] Setting ErrFile to fd 2...
	I0202 16:29:07.884065   78683 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0202 16:29:07.884190   78683 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/bin
	I0202 16:29:07.884450   78683 out.go:304] Setting JSON to false
	I0202 16:29:07.911398   78683 start.go:112] hostinfo: {"hostname":"37309.local","uptime":28722,"bootTime":1643819425,"procs":364,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0202 16:29:07.911521   78683 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0202 16:29:07.938130   78683 out.go:176] * [functional-20220202162514-76172] minikube v1.25.1 sur Darwin 11.2.3
	I0202 16:29:08.011132   78683 out.go:176]   - MINIKUBE_LOCATION=13251
	I0202 16:29:08.039081   78683 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
	I0202 16:29:08.067092   78683 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0202 16:29:08.091958   78683 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0202 16:29:08.133943   78683 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube
	I0202 16:29:08.134481   78683 config.go:176] Loaded profile config "functional-20220202162514-76172": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0202 16:29:08.134941   78683 driver.go:344] Setting default libvirt URI to qemu:///system
	I0202 16:29:08.243520   78683 docker.go:132] docker version: linux-20.10.6
	I0202 16:29:08.243720   78683 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0202 16:29:08.442671   78683 info.go:263] docker info: {ID:LVNT:MQD4:UDW3:UJT2:HLHX:4UTC:4NTE:52G5:6DGB:YSKS:CFIX:B23W Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:53 SystemTime:2022-02-03 00:29:08.3813292 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAdd
ress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccom
p,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0202 16:29:08.469597   78683 out.go:176] * Utilisation du pilote docker basé sur le profil existant
	I0202 16:29:08.469631   78683 start.go:281] selected driver: docker
	I0202 16:29:08.469646   78683 start.go:798] validating driver "docker" against &{Name:functional-20220202162514-76172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:functional-20220202162514-76172 Namespace:default APIServerName:minikubeCA APIServ
erNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regis
try:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0202 16:29:08.470671   78683 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0202 16:29:08.499233   78683 out.go:176] 
	W0202 16:29:08.499372   78683 out.go:241] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0202 16:29:08.524992   78683 out.go:176] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.72s)

TestFunctional/parallel/StatusCmd (2.38s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:855: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 status
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:861: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:873: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (2.38s)

TestFunctional/parallel/AddonsCmd (0.36s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1549: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 addons list
functional_test.go:1561: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.36s)

TestFunctional/parallel/PersistentVolumeClaim (34.02s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:343: "storage-provisioner" [67c05a57-0400-4d95-9030-a22a7f1ae94f] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.008471303s
functional_test_pvc_test.go:50: (dbg) Run:  kubectl --context functional-20220202162514-76172 get storageclass -o=json
functional_test_pvc_test.go:70: (dbg) Run:  kubectl --context functional-20220202162514-76172 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20220202162514-76172 get pvc myclaim -o=json
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20220202162514-76172 get pvc myclaim -o=json
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20220202162514-76172 get pvc myclaim -o=json
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20220202162514-76172 get pvc myclaim -o=json
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20220202162514-76172 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [5c9bd230-8c0d-43e2-a4e7-beef4c62e836] Pending
helpers_test.go:343: "sp-pod" [5c9bd230-8c0d-43e2-a4e7-beef4c62e836] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [5c9bd230-8c0d-43e2-a4e7-beef4c62e836] Running
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.014390767s
functional_test_pvc_test.go:101: (dbg) Run:  kubectl --context functional-20220202162514-76172 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:107: (dbg) Run:  kubectl --context functional-20220202162514-76172 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20220202162514-76172 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [de50359e-b024-41c0-b0ee-2da66ff08fef] Pending
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [de50359e-b024-41c0-b0ee-2da66ff08fef] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:343: "sp-pod" [de50359e-b024-41c0-b0ee-2da66ff08fef] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.009720372s
functional_test_pvc_test.go:115: (dbg) Run:  kubectl --context functional-20220202162514-76172 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (34.02s)
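The sequence above is the crux of the PVC test: a file written to the claim-backed mount by one pod (`touch /tmp/mount/foo`) must still be visible after that pod is deleted and a fresh one is scheduled (`ls /tmp/mount`). A minimal local sketch of that persistence check, with no cluster involved and a temp directory standing in for the provisioned volume:

```shell
# Hypothetical stand-in for the claim-backed volume; the real test runs
# `kubectl exec sp-pod -- touch/ls` against /tmp/mount across two pod lifetimes.
pv=$(mktemp -d)

# "first pod": write a marker file into the volume
touch "$pv/foo"

# the pod is deleted and recreated; the claim and its backing volume persist

# "second pod": the marker must have survived the pod restart
ls "$pv"
```

The PASS above corresponds to the second `sp-pod` still seeing `foo` in `/tmp/mount`.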

TestFunctional/parallel/SSHCmd (1.43s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1584: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 ssh "echo hello"
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1601: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.43s)

TestFunctional/parallel/CpCmd (2.87s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 cp testdata/cp-test.txt /home/docker/cp-test.txt
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 ssh -n functional-20220202162514-76172 "sudo cat /home/docker/cp-test.txt"
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 cp functional-20220202162514-76172:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/mk_test855329587/cp-test.txt
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 ssh -n functional-20220202162514-76172 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.87s)

TestFunctional/parallel/MySQL (22.04s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1653: (dbg) Run:  kubectl --context functional-20220202162514-76172 replace --force -f testdata/mysql.yaml
functional_test.go:1659: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:343: "mysql-b87c45988-klhgl" [bea28e32-c8ba-4608-a71f-731ef49e08f0] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-b87c45988-klhgl" [bea28e32-c8ba-4608-a71f-731ef49e08f0] Running
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1659: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.008646496s
functional_test.go:1667: (dbg) Run:  kubectl --context functional-20220202162514-76172 exec mysql-b87c45988-klhgl -- mysql -ppassword -e "show databases;"
functional_test.go:1667: (dbg) Non-zero exit: kubectl --context functional-20220202162514-76172 exec mysql-b87c45988-klhgl -- mysql -ppassword -e "show databases;": exit status 1 (130.055888ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1667: (dbg) Run:  kubectl --context functional-20220202162514-76172 exec mysql-b87c45988-klhgl -- mysql -ppassword -e "show databases;"
functional_test.go:1667: (dbg) Non-zero exit: kubectl --context functional-20220202162514-76172 exec mysql-b87c45988-klhgl -- mysql -ppassword -e "show databases;": exit status 1 (137.672855ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1667: (dbg) Run:  kubectl --context functional-20220202162514-76172 exec mysql-b87c45988-klhgl -- mysql -ppassword -e "show databases;"
functional_test.go:1667: (dbg) Non-zero exit: kubectl --context functional-20220202162514-76172 exec mysql-b87c45988-klhgl -- mysql -ppassword -e "show databases;": exit status 1 (123.129795ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1667: (dbg) Run:  kubectl --context functional-20220202162514-76172 exec mysql-b87c45988-klhgl -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.04s)
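The three non-zero exits above are expected noise rather than flakes: the pod reports Running before `mysqld` is ready to serve, so the first exec lands during entrypoint initialization (typically an `Access denied` while the temporary bootstrap server is up, then `Can't connect` to the socket during the restart), and the test simply reruns the same command until it succeeds. A stubbed sketch of that retry shape (hypothetical helper names; a counter stands in for the real `kubectl exec ... mysql` call):

```shell
# Retry a command up to N times, as the test does with `kubectl exec`.
retry() {
  attempts=$1; shift
  for i in $(seq 1 "$attempts"); do
    "$@" && return 0
    sleep 0   # the real test backs off between attempts
  done
  return 1
}

attempt=0
fake_mysql() {
  attempt=$((attempt + 1))
  [ "$attempt" -ge 4 ]   # fail three times, then succeed, mirroring the log
}

retry 5 fake_mysql && echo "mysql answered on attempt $attempt"
```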

TestFunctional/parallel/FileSync (0.75s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1789: Checking for existence of /etc/test/nested/copy/76172/hosts within VM
functional_test.go:1791: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 ssh "sudo cat /etc/test/nested/copy/76172/hosts"
functional_test.go:1796: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.75s)

TestFunctional/parallel/CertSync (4.4s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1832: Checking for existence of /etc/ssl/certs/76172.pem within VM
functional_test.go:1833: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 ssh "sudo cat /etc/ssl/certs/76172.pem"
functional_test.go:1832: Checking for existence of /usr/share/ca-certificates/76172.pem within VM
functional_test.go:1833: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 ssh "sudo cat /usr/share/ca-certificates/76172.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1832: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1833: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 ssh "sudo cat /etc/ssl/certs/51391683.0"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1859: Checking for existence of /etc/ssl/certs/761722.pem within VM
functional_test.go:1860: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 ssh "sudo cat /etc/ssl/certs/761722.pem"
functional_test.go:1859: Checking for existence of /usr/share/ca-certificates/761722.pem within VM
functional_test.go:1860: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 ssh "sudo cat /usr/share/ca-certificates/761722.pem"
functional_test.go:1859: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1860: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (4.40s)
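The hashed filenames checked above (`51391683.0`, `3ec20f2e.0`) follow OpenSSL's certificate-directory convention: lookups resolve a CA by the subject-name hash of the cert, so the synced cert is installed under both its `.pem` name and `<subject_hash>.0`. A sketch of where such a name comes from (assumes the `openssl` CLI is available; throwaway self-signed cert, not the test's real certificate):

```shell
dir=$(mktemp -d)

# Throwaway self-signed certificate standing in for the synced test cert.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=certsync-demo" \
  -days 1 -keyout "$dir/key.pem" -out "$dir/cert.pem" 2>/dev/null

# The 8-hex-digit name OpenSSL's directory lookup expects, plus ".0".
hash=$(openssl x509 -noout -subject_hash -in "$dir/cert.pem")
cp "$dir/cert.pem" "$dir/$hash.0"
ls "$dir"
```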

TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-20220202162514-76172 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1887: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 ssh "sudo systemctl is-active crio"
functional_test.go:1887: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220202162514-76172 ssh "sudo systemctl is-active crio": exit status 1 (620.391167ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)
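The `exit status 1` above is the passing outcome: with Docker as the active runtime, `systemctl is-active crio` prints `inactive` and exits 3 (the remote status ssh reports in stderr), and the test asserts exactly that non-zero result. A stubbed sketch of the exit-code behaviour (no systemd assumed here, so a hypothetical function mimics the command):

```shell
# Mimics `systemctl is-active crio` on a host where cri-o is disabled:
# the state is printed on stdout and the exit status is 3 ("not active").
is_active() {
  echo "inactive"
  return 3
}

if state=$(is_active); then
  result="unexpectedly active"
else
  rc=$?                       # status of is_active, captured immediately
  result="$state (exit $rc)"
fi
echo "$result"
```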

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:128: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-20220202162514-76172 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.72s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:148: (dbg) Run:  kubectl --context functional-20220202162514-76172 apply -f testdata/testsvc.yaml
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:152: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:343: "nginx-svc" [b41bfdce-b181-4581-ab79-736f08385eaf] Pending
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:343: "nginx-svc" [b41bfdce-b181-4581-ab79-736f08385eaf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:343: "nginx-svc" [b41bfdce-b181-4581-ab79-736f08385eaf] Running
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:152: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.0225324s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.72s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220202162514-76172 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (9.76s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
=== CONT  TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:235: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (9.76s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:370: (dbg) stopping [out/minikube-darwin-amd64 -p functional-20220202162514-76172 tunnel --alsologtostderr] ...
helpers_test.go:501: unable to terminate pid 78450: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.92s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1280: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1285: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.92s)

TestFunctional/parallel/ProfileCmd/profile_list (0.75s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1325: Took "676.510347ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1334: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1339: Took "70.426928ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.75s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.83s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1371: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: Took "723.561161ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1384: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1389: Took "107.344552ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.83s)

TestFunctional/parallel/MountCmd/any-port (12.06s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:76: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20220202162514-76172 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/mounttest2485822090:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:110: wrote "test-1643848145119417000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/mounttest2485822090/created-by-test
functional_test_mount_test.go:110: wrote "test-1643848145119417000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/mounttest2485822090/created-by-test-removed-by-pod
functional_test_mount_test.go:110: wrote "test-1643848145119417000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/mounttest2485822090/test-1643848145119417000
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220202162514-76172 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (798.643216ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 ssh -- ls -la /mount-9p
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:136: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb  3 00:29 created-by-test
-rw-r--r-- 1 docker docker 24 Feb  3 00:29 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb  3 00:29 test-1643848145119417000
functional_test_mount_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 ssh cat /mount-9p/test-1643848145119417000
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:151: (dbg) Run:  kubectl --context functional-20220202162514-76172 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:343: "busybox-mount" [a45feb27-da6d-4209-a93c-394e1a832884] Pending
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:343: "busybox-mount" [a45feb27-da6d-4209-a93c-394e1a832884] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:343: "busybox-mount" [a45feb27-da6d-4209-a93c-394e1a832884] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.01410561s
functional_test_mount_test.go:172: (dbg) Run:  kubectl --context functional-20220202162514-76172 logs busybox-mount
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:97: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220202162514-76172 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/mounttest2485822090:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (12.06s)
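The single failed `findmnt -T /mount-9p | grep 9p` near the top of this test is expected: the 9p mount is established asynchronously by the `minikube mount` daemon, so the test polls until it appears in the guest's mount table. A stubbed sketch of that poll (hypothetical predicate; a counter stands in for the real ssh/findmnt call):

```shell
# Poll until a predicate succeeds, as the test does while waiting for
# the 9p mount to show up in the guest's mount table.
checks=0
mount_visible() {
  checks=$((checks + 1))
  [ "$checks" -ge 2 ]   # absent on the first check, present on the second
}

until mount_visible; do
  sleep 0   # the real test waits between retries
done
echo "9p mount visible after $checks checks"
```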

TestFunctional/parallel/Version/short (0.1s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (1.34s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2133: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 version -o=json --components
functional_test.go:2133: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220202162514-76172 version -o=json --components: (1.340632503s)
--- PASS: TestFunctional/parallel/Version/components (1.34s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.44s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 image ls --format short
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220202162514-76172 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.23.2
k8s.gcr.io/kube-proxy:v1.23.2
k8s.gcr.io/kube-controller-manager:v1.23.2
k8s.gcr.io/kube-apiserver:v1.23.2
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220202162514-76172
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220202162514-76172
docker.io/kubernetesui/metrics-scraper:v1.0.7
docker.io/kubernetesui/dashboard:v2.3.1
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.44s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.44s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 image ls --format table
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220202162514-76172 image ls --format table:
|---------------------------------------------|---------------------------------|---------------|--------|
|                    Image                    |               Tag               |   Image ID    |  Size  |
|---------------------------------------------|---------------------------------|---------------|--------|
| k8s.gcr.io/pause                            | latest                          | 350b164e7ae1d | 240kB  |
| docker.io/library/mysql                     | 5.7                             | 0712d5dc1b147 | 448MB  |
| k8s.gcr.io/kube-apiserver                   | v1.23.2                         | 8a0228dd6a683 | 135MB  |
| k8s.gcr.io/kube-scheduler                   | v1.23.2                         | 6114d758d6d16 | 53.5MB |
| k8s.gcr.io/kube-proxy                       | v1.23.2                         | d922ca3da64b3 | 112MB  |
| k8s.gcr.io/etcd                             | 3.5.1-0                         | 25f8c7f3da61c | 293MB  |
| k8s.gcr.io/pause                            | 3.3                             | 0184c1613d929 | 683kB  |
| docker.io/library/nginx                     | latest                          | c316d5a335a5c | 142MB  |
| k8s.gcr.io/kube-controller-manager          | v1.23.2                         | 4783639ba7e03 | 125MB  |
| k8s.gcr.io/pause                            | 3.6                             | 6270bb605e12e | 683kB  |
| docker.io/kubernetesui/metrics-scraper      | v1.0.7                          | 7801cfc6d5c07 | 34.4MB |
| docker.io/localhost/my-image                | functional-20220202162514-76172 | dcfca9a8800e0 | 1.24MB |
| gcr.io/k8s-minikube/busybox                 | latest                          | beae173ccac6a | 1.24MB |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                          | a4ca41631cc7a | 46.8MB |
| docker.io/kubernetesui/dashboard            | v2.3.1                          | e1482a24335a6 | 220MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                    | 56cc512116c8f | 4.4MB  |
| k8s.gcr.io/pause                            | 3.1                             | da86e6ba6ca19 | 742kB  |
| docker.io/library/minikube-local-cache-test | functional-20220202162514-76172 | e5a5965943f51 | 30B    |
| docker.io/library/nginx                     | alpine                          | bef258acf10dc | 23.4MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                              | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-20220202162514-76172 | ffd4cfbbe753e | 32.9MB |
| k8s.gcr.io/echoserver                       | 1.8                             | 82e4c8a736a4f | 95.4MB |
|---------------------------------------------|---------------------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.44s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 image ls --format json

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220202162514-76172 image ls --format json:
[{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"0712d5dc1b147bdda13b0a45d1b12ef5520539d28c2850ae450960bfdcdd20c7","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"448000000"},{"id":"bef258acf10dc257d641c47c3a600c92f87be4b4ce4a5e4752b3eade7533dcd9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23400000"},{"id":"8a0228dd6a683beecf635200927ab22cc4d9fb4302c340cae4a4c4b2b146aa24","repoDigests":[],"repoTags":["k8s.gcr.io/kube-apiserver:v1.23.2"],"size":"135000000"},{"id":"4783639ba7e039dff291e4a9cc8a72f5f7c5bdd7f3441b57d3b5eb251cacc248","repoDigests":[],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.23.2"],"size":"125000000"},{"id":"6114d758d6d16d5b75586c98f8fb524d348fcbb125fb9be1e942dc7e91bbc5b4","repoDigests":[],"repoTags":["k8s.gcr.io/kube-scheduler:v1.23.2"],"size":"53500000"},{"id":"25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d","repoDigests":[],"repoTags":["k8s.gcr.io/etcd:3.5.1-0"],"size":"293000000"},{"id":"a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":[],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"46800000"},{"id":"e5a5965943f5178c4892a4e71b226ed03d61cf5aa3e900fb353458e4c797ccee","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220202162514-76172"],"size":"30"},{"id":"d922ca3da64b3f8464058d9ebbc361dd82cc86ea59cd337a4e33967bc8ede44f","repoDigests":[],"repoTags":["k8s.gcr.io/kube-proxy:v1.23.2"],"size":"112000000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220202162514-76172"],"size":"32900000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"e1482a24335a6e76d438ae175f79409004588570d3e5dbb4c8140e025e848570","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:v2.3.1"],"size":"220000000"},{"id":"7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:v1.0.7"],"size":"34400000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"dcfca9a8800e01861217816b9554060e8bfa8ec3fad831bbbe5e109be0125c8c","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-20220202162514-76172"],"size":"1240000"},{"id":"c316d5a335a5cf324b0dc83b3da82d7608724769f6454f6d9a621f3ec2534a5a","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.44s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 image ls --format yaml
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220202162514-76172 image ls --format yaml:
- id: 6114d758d6d16d5b75586c98f8fb524d348fcbb125fb9be1e942dc7e91bbc5b4
repoDigests: []
repoTags:
- k8s.gcr.io/kube-scheduler:v1.23.2
size: "53500000"
- id: 7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:v1.0.7
size: "34400000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: e5a5965943f5178c4892a4e71b226ed03d61cf5aa3e900fb353458e4c797ccee
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220202162514-76172
size: "30"
- id: 4783639ba7e039dff291e4a9cc8a72f5f7c5bdd7f3441b57d3b5eb251cacc248
repoDigests: []
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.23.2
size: "125000000"
- id: d922ca3da64b3f8464058d9ebbc361dd82cc86ea59cd337a4e33967bc8ede44f
repoDigests: []
repoTags:
- k8s.gcr.io/kube-proxy:v1.23.2
size: "112000000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 0712d5dc1b147bdda13b0a45d1b12ef5520539d28c2850ae450960bfdcdd20c7
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "448000000"
- id: c316d5a335a5cf324b0dc83b3da82d7608724769f6454f6d9a621f3ec2534a5a
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 8a0228dd6a683beecf635200927ab22cc4d9fb4302c340cae4a4c4b2b146aa24
repoDigests: []
repoTags:
- k8s.gcr.io/kube-apiserver:v1.23.2
size: "135000000"
- id: 25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d
repoDigests: []
repoTags:
- k8s.gcr.io/etcd:3.5.1-0
size: "293000000"
- id: a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests: []
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "46800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220202162514-76172
size: "32900000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: bef258acf10dc257d641c47c3a600c92f87be4b4ce4a5e4752b3eade7533dcd9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23400000"
- id: e1482a24335a6e76d438ae175f79409004588570d3e5dbb4c8140e025e848570
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:v2.3.1
size: "220000000"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.43s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 ssh pgrep buildkitd
functional_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220202162514-76172 ssh pgrep buildkitd: exit status 1 (638.988606ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 image build -t localhost/my-image:functional-20220202162514-76172 testdata/build

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:311: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220202162514-76172 image build -t localhost/my-image:functional-20220202162514-76172 testdata/build: (4.905235415s)
functional_test.go:316: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220202162514-76172 image build -t localhost/my-image:functional-20220202162514-76172 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in c81cab88447e
Removing intermediate container c81cab88447e
---> 1d8b862f183a
Step 3/3 : ADD content.txt /
---> dcfca9a8800e
Successfully built dcfca9a8800e
Successfully tagged localhost/my-image:functional-20220202162514-76172
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.99s)

TestFunctional/parallel/ImageCommands/Setup (4.1s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.962490757s)
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220202162514-76172
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.10s)

TestFunctional/parallel/MountCmd/specific-port (3.62s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:225: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20220202162514-76172 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/mounttest3000169522:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220202162514-76172 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (658.398308ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 ssh -- ls -la /mount-9p
functional_test_mount_test.go:273: guest mount directory contents
total 0
functional_test_mount_test.go:275: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220202162514-76172 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/mounttest3000169522:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:276: reading mount text
functional_test_mount_test.go:290: done reading mount text
functional_test_mount_test.go:242: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:242: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220202162514-76172 ssh "sudo umount -f /mount-9p": exit status 1 (667.354531ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:244: "out/minikube-darwin-amd64 -p functional-20220202162514-76172 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:246: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220202162514-76172 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/mounttest3000169522:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (3.62s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220202162514-76172

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220202162514-76172 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220202162514-76172: (3.37965207s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.85s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220202162514-76172

=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220202162514-76172 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220202162514-76172: (2.514615588s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.97s)

TestFunctional/parallel/DockerEnv/bash (2.63s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220202162514-76172 docker-env) && out/minikube-darwin-amd64 status -p functional-20220202162514-76172"

=== CONT  TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220202162514-76172 docker-env) && out/minikube-darwin-amd64 status -p functional-20220202162514-76172": (1.558136134s)
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220202162514-76172 docker-env) && docker images"

=== CONT  TestFunctional/parallel/DockerEnv/bash
functional_test.go:518: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220202162514-76172 docker-env) && docker images": (1.069335842s)
--- PASS: TestFunctional/parallel/DockerEnv/bash (2.63s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
E0202 16:29:25.764503   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: no such file or directory

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.772955537s)
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220202162514-76172
functional_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220202162514-76172

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:241: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220202162514-76172 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220202162514-76172: (3.142566445s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.49s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.36s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1979: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.36s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.89s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1979: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.89s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.36s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1979: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.36s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 image save gcr.io/google-containers/addon-resizer:functional-20220202162514-76172 /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:376: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220202162514-76172 image save gcr.io/google-containers/addon-resizer:functional-20220202162514-76172 /Users/jenkins/workspace/addon-resizer-save.tar: (1.684980381s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.69s)

TestFunctional/parallel/ImageCommands/ImageRemove (1.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 image rm gcr.io/google-containers/addon-resizer:functional-20220202162514-76172
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.38s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 image load /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:405: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220202162514-76172 image load /Users/jenkins/workspace/addon-resizer-save.tar: (2.161776925s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.67s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220202162514-76172
functional_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220202162514-76172

=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220202162514-76172 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220202162514-76172: (3.062636708s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220202162514-76172
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.60s)

TestFunctional/delete_addon-resizer_images (0.28s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220202162514-76172
--- PASS: TestFunctional/delete_addon-resizer_images (0.28s)

TestFunctional/delete_my-image_image (0.12s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220202162514-76172
--- PASS: TestFunctional/delete_my-image_image (0.12s)

TestFunctional/delete_minikube_cached_images (0.12s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220202162514-76172
--- PASS: TestFunctional/delete_minikube_cached_images (0.12s)

TestIngressAddonLegacy/StartLegacyK8sCluster (138.11s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:40: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220202163007-76172 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0202 16:31:41.886066   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: no such file or directory
E0202 16:32:09.635496   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: no such file or directory
ingress_addon_legacy_test.go:40: (dbg) Done: out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220202163007-76172 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : (2m18.105130903s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (138.11s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (16.42s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220202163007-76172 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:71: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-20220202163007-76172 addons enable ingress --alsologtostderr -v=5: (16.416294065s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (16.42s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.62s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220202163007-76172 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.62s)

TestJSONOutput/start/Command (133.63s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-20220202163334-76172 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E0202 16:33:35.270058   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202162514-76172/client.crt: no such file or directory
E0202 16:33:40.394179   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202162514-76172/client.crt: no such file or directory
E0202 16:33:50.640424   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202162514-76172/client.crt: no such file or directory
E0202 16:34:11.125735   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202162514-76172/client.crt: no such file or directory
E0202 16:34:52.096367   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202162514-76172/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-20220202163334-76172 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (2m13.625037813s)
--- PASS: TestJSONOutput/start/Command (133.63s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (1.57s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-20220202163334-76172 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 pause -p json-output-20220202163334-76172 --output=json --user=testUser: (1.573271144s)
--- PASS: TestJSONOutput/pause/Command (1.57s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.85s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-20220202163334-76172 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.85s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (18.29s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-20220202163334-76172 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-20220202163334-76172 --output=json --user=testUser: (18.28720729s)
--- PASS: TestJSONOutput/stop/Command (18.29s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.79s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-20220202163616-76172 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-20220202163616-76172 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (122.130507ms)

-- stdout --
	{"specversion":"1.0","id":"b6b71fb2-043c-4880-85e8-8dd162ab9053","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220202163616-76172] minikube v1.25.1 on Darwin 11.2.3","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f8d11762-26e5-4d2c-a9ea-613234ab5916","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13251"}}
	{"specversion":"1.0","id":"77ed8028-bf98-4f6c-8100-9e7502b97b87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig"}}
	{"specversion":"1.0","id":"17f02744-8977-4bfe-bf9e-a60403df2efc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"dbebbbbe-11e5-43d8-bb97-5c4158759ee5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f2d7b43f-fde9-4eca-9f8d-36d0903de64f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube"}}
	{"specversion":"1.0","id":"36eb54de-af12-4392-8dc4-fbe73afb3537","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-20220202163616-76172" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-20220202163616-76172
--- PASS: TestErrorJSONOutput (0.79s)

TestKicCustomNetwork/create_custom_network (96.53s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20220202163616-76172 --network=
E0202 16:36:41.894124   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: no such file or directory
kic_custom_network_test.go:58: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20220202163616-76172 --network=: (1m21.71118273s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20220202163616-76172" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20220202163616-76172
E0202 16:37:42.611704   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202163007-76172/client.crt: no such file or directory
E0202 16:37:42.618281   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202163007-76172/client.crt: no such file or directory
E0202 16:37:42.628534   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202163007-76172/client.crt: no such file or directory
E0202 16:37:42.649878   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202163007-76172/client.crt: no such file or directory
E0202 16:37:42.695219   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202163007-76172/client.crt: no such file or directory
E0202 16:37:42.776466   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202163007-76172/client.crt: no such file or directory
E0202 16:37:42.945216   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202163007-76172/client.crt: no such file or directory
E0202 16:37:43.265893   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202163007-76172/client.crt: no such file or directory
E0202 16:37:43.915907   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202163007-76172/client.crt: no such file or directory
E0202 16:37:45.200000   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202163007-76172/client.crt: no such file or directory
E0202 16:37:47.760830   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202163007-76172/client.crt: no such file or directory
E0202 16:37:52.881633   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202163007-76172/client.crt: no such file or directory
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20220202163616-76172: (14.69627732s)
--- PASS: TestKicCustomNetwork/create_custom_network (96.53s)

TestKicCustomNetwork/use_default_bridge_network (84.26s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20220202163753-76172 --network=bridge
E0202 16:38:03.122764   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202163007-76172/client.crt: no such file or directory
E0202 16:38:23.611387   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202163007-76172/client.crt: no such file or directory
E0202 16:38:30.119090   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202162514-76172/client.crt: no such file or directory
E0202 16:38:57.873679   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202162514-76172/client.crt: no such file or directory
E0202 16:39:04.576376   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202163007-76172/client.crt: no such file or directory
kic_custom_network_test.go:58: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20220202163753-76172 --network=bridge: (1m13.40461069s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20220202163753-76172" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20220202163753-76172
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20220202163753-76172: (10.734887473s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (84.26s)

TestKicExistingNetwork (98.15s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:94: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-20220202163923-76172 --network=existing-network
E0202 16:40:26.499953   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202163007-76172/client.crt: no such file or directory
kic_custom_network_test.go:94: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-20220202163923-76172 --network=existing-network: (1m16.997374267s)
helpers_test.go:176: Cleaning up "existing-network-20220202163923-76172" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-20220202163923-76172
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-20220202163923-76172: (14.774777824s)
--- PASS: TestKicExistingNetwork (98.15s)

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMountStart/serial/StartWithMountFirst (54.09s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-20220202164055-76172 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
E0202 16:41:41.896990   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: no such file or directory
mount_start_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-20220202164055-76172 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (53.084629563s)
--- PASS: TestMountStart/serial/StartWithMountFirst (54.09s)

TestMountStart/serial/VerifyMountFirst (0.63s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-20220202164055-76172 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.63s)

TestMountStart/serial/StartWithMountSecond (55.33s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20220202164055-76172 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
E0202 16:42:42.608936   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202163007-76172/client.crt: no such file or directory
mount_start_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-20220202164055-76172 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (54.328548348s)
--- PASS: TestMountStart/serial/StartWithMountSecond (55.33s)

TestMountStart/serial/VerifyMountSecond (0.62s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220202164055-76172 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.62s)

TestMountStart/serial/DeleteFirst (13.51s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-20220202164055-76172 --alsologtostderr -v=5
pause_test.go:133: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-20220202164055-76172 --alsologtostderr -v=5: (13.50873517s)
--- PASS: TestMountStart/serial/DeleteFirst (13.51s)

TestMountStart/serial/VerifyMountPostDelete (0.62s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220202164055-76172 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.62s)

TestMountStart/serial/Stop (8.23s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-20220202164055-76172
E0202 16:43:05.016425   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: no such file or directory
mount_start_test.go:156: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-20220202164055-76172: (8.232756121s)
--- PASS: TestMountStart/serial/Stop (8.23s)

TestMountStart/serial/RestartStopped (33.32s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:167: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20220202164055-76172
E0202 16:43:10.347577   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202163007-76172/client.crt: no such file or directory
E0202 16:43:30.119928   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202162514-76172/client.crt: no such file or directory
mount_start_test.go:167: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-20220202164055-76172: (32.314162357s)
--- PASS: TestMountStart/serial/RestartStopped (33.32s)

TestMountStart/serial/VerifyMountPostStop (0.63s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220202164055-76172 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.63s)

TestMultiNode/serial/FreshStart2Nodes (245.41s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220202164356-76172 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0202 16:46:41.909111   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: no such file or directory
E0202 16:47:42.620620   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202163007-76172/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220202164356-76172 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (4m4.278328479s)
multinode_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 status --alsologtostderr
multinode_test.go:92: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220202164356-76172 status --alsologtostderr: (1.133325234s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (245.41s)

TestMultiNode/serial/DeployApp2Nodes (9.17s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220202164356-76172 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20220202164356-76172 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: (1.988534383s)
multinode_test.go:491: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220202164356-76172 -- rollout status deployment/busybox
multinode_test.go:491: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20220202164356-76172 -- rollout status deployment/busybox: (5.694996625s)
multinode_test.go:497: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220202164356-76172 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:509: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220202164356-76172 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:517: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220202164356-76172 -- exec busybox-7978565885-gtl48 -- nslookup kubernetes.io
multinode_test.go:517: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220202164356-76172 -- exec busybox-7978565885-vzftp -- nslookup kubernetes.io
multinode_test.go:527: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220202164356-76172 -- exec busybox-7978565885-gtl48 -- nslookup kubernetes.default
multinode_test.go:527: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220202164356-76172 -- exec busybox-7978565885-vzftp -- nslookup kubernetes.default
multinode_test.go:535: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220202164356-76172 -- exec busybox-7978565885-gtl48 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:535: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220202164356-76172 -- exec busybox-7978565885-vzftp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (9.17s)

TestMultiNode/serial/PingHostFrom2Pods (0.86s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:545: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220202164356-76172 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:553: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220202164356-76172 -- exec busybox-7978565885-gtl48 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:561: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220202164356-76172 -- exec busybox-7978565885-gtl48 -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:553: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220202164356-76172 -- exec busybox-7978565885-vzftp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:561: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220202164356-76172 -- exec busybox-7978565885-vzftp -- sh -c "ping -c 1 192.168.65.2"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.86s)
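The host-IP lookup at multinode_test.go:553 depends on busybox nslookup's fixed output layout: the answer for the queried name lands on line 5, so `awk 'NR==5'` selects it and `cut -d' ' -f3` takes the IP as the third space-delimited field. A minimal sketch of that pipeline against a canned nslookup response (the sample addresses are illustrative, not from this run):

```shell
# Canned busybox-style nslookup output; line 5 holds the answer record.
sample='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.65.2 host.minikube.internal'

# NR==5 keeps only the fifth line; field 3 of "Address 1: <ip> <name>" is the IP.
host_ip=$(printf '%s\n' "$sample" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"
```

The extracted IP is what the test then pings from each pod (`ping -c 1 192.168.65.2` above).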

TestMultiNode/serial/AddNode (119.42s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20220202164356-76172 -v 3 --alsologtostderr
E0202 16:48:30.136967   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202162514-76172/client.crt: no such file or directory
E0202 16:49:53.252211   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202162514-76172/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-20220202164356-76172 -v 3 --alsologtostderr: (1m57.825213741s)
multinode_test.go:117: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 status --alsologtostderr
multinode_test.go:117: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220202164356-76172 status --alsologtostderr: (1.591220977s)
--- PASS: TestMultiNode/serial/AddNode (119.42s)

TestMultiNode/serial/ProfileList (0.71s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

TestMultiNode/serial/CopyFile (23.12s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 status --output json --alsologtostderr
multinode_test.go:174: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220202164356-76172 status --output json --alsologtostderr: (1.575401655s)
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 cp testdata/cp-test.txt multinode-20220202164356-76172:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 ssh -n multinode-20220202164356-76172 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 cp multinode-20220202164356-76172:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/mk_cp_test1368572388/cp-test_multinode-20220202164356-76172.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 ssh -n multinode-20220202164356-76172 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 cp multinode-20220202164356-76172:/home/docker/cp-test.txt multinode-20220202164356-76172-m02:/home/docker/cp-test_multinode-20220202164356-76172_multinode-20220202164356-76172-m02.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 ssh -n multinode-20220202164356-76172 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 ssh -n multinode-20220202164356-76172-m02 "sudo cat /home/docker/cp-test_multinode-20220202164356-76172_multinode-20220202164356-76172-m02.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 cp multinode-20220202164356-76172:/home/docker/cp-test.txt multinode-20220202164356-76172-m03:/home/docker/cp-test_multinode-20220202164356-76172_multinode-20220202164356-76172-m03.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 ssh -n multinode-20220202164356-76172 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 ssh -n multinode-20220202164356-76172-m03 "sudo cat /home/docker/cp-test_multinode-20220202164356-76172_multinode-20220202164356-76172-m03.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 cp testdata/cp-test.txt multinode-20220202164356-76172-m02:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 ssh -n multinode-20220202164356-76172-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 cp multinode-20220202164356-76172-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/mk_cp_test1368572388/cp-test_multinode-20220202164356-76172-m02.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 ssh -n multinode-20220202164356-76172-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 cp multinode-20220202164356-76172-m02:/home/docker/cp-test.txt multinode-20220202164356-76172:/home/docker/cp-test_multinode-20220202164356-76172-m02_multinode-20220202164356-76172.txt
helpers_test.go:555: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220202164356-76172 cp multinode-20220202164356-76172-m02:/home/docker/cp-test.txt multinode-20220202164356-76172:/home/docker/cp-test_multinode-20220202164356-76172-m02_multinode-20220202164356-76172.txt: (1.003256499s)
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 ssh -n multinode-20220202164356-76172-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 ssh -n multinode-20220202164356-76172 "sudo cat /home/docker/cp-test_multinode-20220202164356-76172-m02_multinode-20220202164356-76172.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 cp multinode-20220202164356-76172-m02:/home/docker/cp-test.txt multinode-20220202164356-76172-m03:/home/docker/cp-test_multinode-20220202164356-76172-m02_multinode-20220202164356-76172-m03.txt
helpers_test.go:555: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220202164356-76172 cp multinode-20220202164356-76172-m02:/home/docker/cp-test.txt multinode-20220202164356-76172-m03:/home/docker/cp-test_multinode-20220202164356-76172-m02_multinode-20220202164356-76172-m03.txt: (1.000015667s)
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 ssh -n multinode-20220202164356-76172-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 ssh -n multinode-20220202164356-76172-m03 "sudo cat /home/docker/cp-test_multinode-20220202164356-76172-m02_multinode-20220202164356-76172-m03.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 cp testdata/cp-test.txt multinode-20220202164356-76172-m03:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 ssh -n multinode-20220202164356-76172-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 cp multinode-20220202164356-76172-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/mk_cp_test1368572388/cp-test_multinode-20220202164356-76172-m03.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 ssh -n multinode-20220202164356-76172-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 cp multinode-20220202164356-76172-m03:/home/docker/cp-test.txt multinode-20220202164356-76172:/home/docker/cp-test_multinode-20220202164356-76172-m03_multinode-20220202164356-76172.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 ssh -n multinode-20220202164356-76172-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 ssh -n multinode-20220202164356-76172 "sudo cat /home/docker/cp-test_multinode-20220202164356-76172-m03_multinode-20220202164356-76172.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 cp multinode-20220202164356-76172-m03:/home/docker/cp-test.txt multinode-20220202164356-76172-m02:/home/docker/cp-test_multinode-20220202164356-76172-m03_multinode-20220202164356-76172-m02.txt
helpers_test.go:555: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220202164356-76172 cp multinode-20220202164356-76172-m03:/home/docker/cp-test.txt multinode-20220202164356-76172-m02:/home/docker/cp-test_multinode-20220202164356-76172-m03_multinode-20220202164356-76172-m02.txt: (1.004058224s)
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 ssh -n multinode-20220202164356-76172-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 ssh -n multinode-20220202164356-76172-m02 "sudo cat /home/docker/cp-test_multinode-20220202164356-76172-m03_multinode-20220202164356-76172-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (23.12s)

TestMultiNode/serial/StopNode (11.32s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:215: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 node stop m03
multinode_test.go:215: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220202164356-76172 node stop m03: (8.820776233s)
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 status
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220202164356-76172 status: exit status 7 (1.26341113s)
-- stdout --
	multinode-20220202164356-76172
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220202164356-76172-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220202164356-76172-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 status --alsologtostderr
multinode_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220202164356-76172 status --alsologtostderr: exit status 7 (1.239409854s)
-- stdout --
	multinode-20220202164356-76172
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220202164356-76172-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220202164356-76172-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0202 16:50:45.586034   82959 out.go:297] Setting OutFile to fd 1 ...
	I0202 16:50:45.586160   82959 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0202 16:50:45.586166   82959 out.go:310] Setting ErrFile to fd 2...
	I0202 16:50:45.586169   82959 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0202 16:50:45.586239   82959 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/bin
	I0202 16:50:45.586398   82959 out.go:304] Setting JSON to false
	I0202 16:50:45.586413   82959 mustload.go:65] Loading cluster: multinode-20220202164356-76172
	I0202 16:50:45.586664   82959 config.go:176] Loaded profile config "multinode-20220202164356-76172": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0202 16:50:45.586676   82959 status.go:253] checking status of multinode-20220202164356-76172 ...
	I0202 16:50:45.587024   82959 cli_runner.go:133] Run: docker container inspect multinode-20220202164356-76172 --format={{.State.Status}}
	I0202 16:50:45.705667   82959 status.go:328] multinode-20220202164356-76172 host status = "Running" (err=<nil>)
	I0202 16:50:45.705700   82959 host.go:66] Checking if "multinode-20220202164356-76172" exists ...
	I0202 16:50:45.706033   82959 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220202164356-76172
	I0202 16:50:45.823885   82959 host.go:66] Checking if "multinode-20220202164356-76172" exists ...
	I0202 16:50:45.824149   82959 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0202 16:50:45.824213   82959 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220202164356-76172
	I0202 16:50:45.940580   82959 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65181 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/multinode-20220202164356-76172/id_rsa Username:docker}
	I0202 16:50:46.033652   82959 ssh_runner.go:195] Run: systemctl --version
	I0202 16:50:46.038344   82959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0202 16:50:46.047532   82959 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220202164356-76172
	I0202 16:50:46.165543   82959 kubeconfig.go:92] found "multinode-20220202164356-76172" server: "https://127.0.0.1:65186"
	I0202 16:50:46.165568   82959 api_server.go:165] Checking apiserver status ...
	I0202 16:50:46.165609   82959 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0202 16:50:46.181040   82959 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1736/cgroup
	I0202 16:50:46.189080   82959 api_server.go:181] apiserver freezer: "7:freezer:/docker/a02dc216b0a21c86864d7575c3d8b360aef06564473f4a13a6fe7fcae891b55e/kubepods/burstable/pod439c0296bc3b943bdb2fc0038880a596/532537b83fb9c67710d7434bfdbcbc745f28891591504f8eec04bb4280b57319"
	I0202 16:50:46.189161   82959 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a02dc216b0a21c86864d7575c3d8b360aef06564473f4a13a6fe7fcae891b55e/kubepods/burstable/pod439c0296bc3b943bdb2fc0038880a596/532537b83fb9c67710d7434bfdbcbc745f28891591504f8eec04bb4280b57319/freezer.state
	I0202 16:50:46.196797   82959 api_server.go:203] freezer state: "THAWED"
	I0202 16:50:46.196816   82959 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:65186/healthz ...
	I0202 16:50:46.202596   82959 api_server.go:266] https://127.0.0.1:65186/healthz returned 200:
	ok
	I0202 16:50:46.202609   82959 status.go:419] multinode-20220202164356-76172 apiserver status = Running (err=<nil>)
	I0202 16:50:46.202617   82959 status.go:255] multinode-20220202164356-76172 status: &{Name:multinode-20220202164356-76172 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0202 16:50:46.202634   82959 status.go:253] checking status of multinode-20220202164356-76172-m02 ...
	I0202 16:50:46.202911   82959 cli_runner.go:133] Run: docker container inspect multinode-20220202164356-76172-m02 --format={{.State.Status}}
	I0202 16:50:46.320604   82959 status.go:328] multinode-20220202164356-76172-m02 host status = "Running" (err=<nil>)
	I0202 16:50:46.320630   82959 host.go:66] Checking if "multinode-20220202164356-76172-m02" exists ...
	I0202 16:50:46.321468   82959 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220202164356-76172-m02
	I0202 16:50:46.441733   82959 host.go:66] Checking if "multinode-20220202164356-76172-m02" exists ...
	I0202 16:50:46.441987   82959 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0202 16:50:46.442047   82959 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220202164356-76172-m02
	I0202 16:50:46.558838   82959 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65519 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/multinode-20220202164356-76172-m02/id_rsa Username:docker}
	I0202 16:50:46.650746   82959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0202 16:50:46.660532   82959 status.go:255] multinode-20220202164356-76172-m02 status: &{Name:multinode-20220202164356-76172-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0202 16:50:46.660553   82959 status.go:253] checking status of multinode-20220202164356-76172-m03 ...
	I0202 16:50:46.660848   82959 cli_runner.go:133] Run: docker container inspect multinode-20220202164356-76172-m03 --format={{.State.Status}}
	I0202 16:50:46.781645   82959 status.go:328] multinode-20220202164356-76172-m03 host status = "Stopped" (err=<nil>)
	I0202 16:50:46.781672   82959 status.go:341] host is not running, skipping remaining checks
	I0202 16:50:46.781677   82959 status.go:255] multinode-20220202164356-76172-m03 status: &{Name:multinode-20220202164356-76172-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (11.32s)

TestMultiNode/serial/StartAfterStop (54.76s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:249: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 node start m03 --alsologtostderr
multinode_test.go:259: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220202164356-76172 node start m03 --alsologtostderr: (53.012909335s)
multinode_test.go:266: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 status
multinode_test.go:266: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220202164356-76172 status: (1.592409931s)
multinode_test.go:280: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (54.76s)

TestMultiNode/serial/RestartKeepsNodes (266.1s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220202164356-76172
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-20220202164356-76172
E0202 16:51:41.915908   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-20220202164356-76172: (46.151756379s)
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220202164356-76172 --wait=true -v=8 --alsologtostderr
E0202 16:52:42.623034   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202163007-76172/client.crt: no such file or directory
E0202 16:53:30.137717   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202162514-76172/client.crt: no such file or directory
E0202 16:54:05.722895   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202163007-76172/client.crt: no such file or directory
multinode_test.go:300: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220202164356-76172 --wait=true -v=8 --alsologtostderr: (3m39.853326186s)
multinode_test.go:305: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220202164356-76172
--- PASS: TestMultiNode/serial/RestartKeepsNodes (266.10s)

TestMultiNode/serial/DeleteNode (18.19s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:399: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 node delete m03
multinode_test.go:399: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220202164356-76172 node delete m03: (15.120053669s)
multinode_test.go:405: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 status --alsologtostderr
multinode_test.go:405: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220202164356-76172 status --alsologtostderr: (1.127268913s)
multinode_test.go:419: (dbg) Run:  docker volume ls
multinode_test.go:429: (dbg) Run:  kubectl get nodes
multinode_test.go:429: (dbg) Done: kubectl get nodes: (1.770441907s)
multinode_test.go:437: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (18.19s)

TestMultiNode/serial/StopMultiNode (37.46s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:319: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 stop
E0202 16:56:41.923349   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: no such file or directory
multinode_test.go:319: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220202164356-76172 stop: (36.905118569s)
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 status
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220202164356-76172 status: exit status 7 (272.101919ms)
-- stdout --
	multinode-20220202164356-76172
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220202164356-76172-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:332: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 status --alsologtostderr
multinode_test.go:332: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220202164356-76172 status --alsologtostderr: exit status 7 (279.526876ms)
-- stdout --
	multinode-20220202164356-76172
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220202164356-76172-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0202 16:57:03.068914   83852 out.go:297] Setting OutFile to fd 1 ...
	I0202 16:57:03.069051   83852 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0202 16:57:03.069056   83852 out.go:310] Setting ErrFile to fd 2...
	I0202 16:57:03.069059   83852 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0202 16:57:03.069135   83852 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/bin
	I0202 16:57:03.069306   83852 out.go:304] Setting JSON to false
	I0202 16:57:03.069322   83852 mustload.go:65] Loading cluster: multinode-20220202164356-76172
	I0202 16:57:03.069602   83852 config.go:176] Loaded profile config "multinode-20220202164356-76172": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0202 16:57:03.069613   83852 status.go:253] checking status of multinode-20220202164356-76172 ...
	I0202 16:57:03.069967   83852 cli_runner.go:133] Run: docker container inspect multinode-20220202164356-76172 --format={{.State.Status}}
	I0202 16:57:03.189153   83852 status.go:328] multinode-20220202164356-76172 host status = "Stopped" (err=<nil>)
	I0202 16:57:03.189181   83852 status.go:341] host is not running, skipping remaining checks
	I0202 16:57:03.189187   83852 status.go:255] multinode-20220202164356-76172 status: &{Name:multinode-20220202164356-76172 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0202 16:57:03.189219   83852 status.go:253] checking status of multinode-20220202164356-76172-m02 ...
	I0202 16:57:03.189510   83852 cli_runner.go:133] Run: docker container inspect multinode-20220202164356-76172-m02 --format={{.State.Status}}
	I0202 16:57:03.302923   83852 status.go:328] multinode-20220202164356-76172-m02 host status = "Stopped" (err=<nil>)
	I0202 16:57:03.302944   83852 status.go:341] host is not running, skipping remaining checks
	I0202 16:57:03.302948   83852 status.go:255] multinode-20220202164356-76172-m02 status: &{Name:multinode-20220202164356-76172-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (37.46s)

TestMultiNode/serial/RestartMultiNode (147.1s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:349: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:359: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220202164356-76172 --wait=true -v=8 --alsologtostderr --driver=docker 
E0202 16:57:42.628569   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202163007-76172/client.crt: no such file or directory
E0202 16:58:30.140250   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202162514-76172/client.crt: no such file or directory
multinode_test.go:359: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220202164356-76172 --wait=true -v=8 --alsologtostderr --driver=docker : (2m24.015189427s)
multinode_test.go:365: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220202164356-76172 status --alsologtostderr
multinode_test.go:365: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220202164356-76172 status --alsologtostderr: (1.14094719s)
multinode_test.go:379: (dbg) Run:  kubectl get nodes
multinode_test.go:379: (dbg) Done: kubectl get nodes: (1.793096025s)
multinode_test.go:387: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (147.10s)

TestMultiNode/serial/ValidateNameConflict (107.64s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:448: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220202164356-76172
multinode_test.go:457: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220202164356-76172-m02 --driver=docker 
multinode_test.go:457: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-20220202164356-76172-m02 --driver=docker : exit status 14 (328.122235ms)

-- stdout --
	* [multinode-20220202164356-76172-m02] minikube v1.25.1 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=13251
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220202164356-76172-m02' is duplicated with machine name 'multinode-20220202164356-76172-m02' in profile 'multinode-20220202164356-76172'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220202164356-76172-m03 --driver=docker 
E0202 16:59:45.046059   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: no such file or directory
multinode_test.go:465: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220202164356-76172-m03 --driver=docker : (1m28.700301124s)
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20220202164356-76172
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-20220202164356-76172: exit status 80 (625.567623ms)

-- stdout --
	* Adding node m03 to cluster multinode-20220202164356-76172
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220202164356-76172-m03 already exists in multinode-20220202164356-76172-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:477: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-20220202164356-76172-m03
multinode_test.go:477: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-20220202164356-76172-m03: (17.942116413s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (107.64s)

TestPreload (222.15s)

=== RUN   TestPreload
preload_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-20220202170143-76172 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0
E0202 17:02:42.648556   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202163007-76172/client.crt: no such file or directory
E0202 17:03:30.168784   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202162514-76172/client.crt: no such file or directory
preload_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-20220202170143-76172 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0: (2m31.52978515s)
preload_test.go:62: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-20220202170143-76172 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:62: (dbg) Done: out/minikube-darwin-amd64 ssh -p test-preload-20220202170143-76172 -- docker pull gcr.io/k8s-minikube/busybox: (4.688733198s)
preload_test.go:72: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-20220202170143-76172 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --kubernetes-version=v1.17.3
preload_test.go:72: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-20220202170143-76172 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --kubernetes-version=v1.17.3: (50.350951365s)
preload_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-20220202170143-76172 -- docker images
helpers_test.go:176: Cleaning up "test-preload-20220202170143-76172" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-20220202170143-76172
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-20220202170143-76172: (14.908575086s)
--- PASS: TestPreload (222.15s)

TestScheduledStopUnix (162.76s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-20220202170525-76172 --memory=2048 --driver=docker 
E0202 17:06:33.289111   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202162514-76172/client.crt: no such file or directory
E0202 17:06:41.940902   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: no such file or directory
scheduled_stop_test.go:129: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-20220202170525-76172 --memory=2048 --driver=docker : (1m23.129567317s)
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220202170525-76172 --schedule 5m
scheduled_stop_test.go:192: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220202170525-76172 -n scheduled-stop-20220202170525-76172
scheduled_stop_test.go:170: signal error was:  <nil>
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220202170525-76172 --schedule 15s
scheduled_stop_test.go:170: signal error was:  os: process already finished
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220202170525-76172 --cancel-scheduled
scheduled_stop_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220202170525-76172 -n scheduled-stop-20220202170525-76172
scheduled_stop_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20220202170525-76172
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220202170525-76172 --schedule 15s
scheduled_stop_test.go:170: signal error was:  os: process already finished
E0202 17:07:42.655937   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202163007-76172/client.crt: no such file or directory
scheduled_stop_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20220202170525-76172
scheduled_stop_test.go:206: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-20220202170525-76172: exit status 7 (155.787034ms)

-- stdout --
	scheduled-stop-20220202170525-76172
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220202170525-76172 -n scheduled-stop-20220202170525-76172
scheduled_stop_test.go:177: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220202170525-76172 -n scheduled-stop-20220202170525-76172: exit status 7 (153.588494ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:177: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-20220202170525-76172" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-20220202170525-76172
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-20220202170525-76172: (6.959225383s)
--- PASS: TestScheduledStopUnix (162.76s)

TestSkaffold (132.92s)

=== RUN   TestSkaffold
skaffold_test.go:57: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe4285548420 version
skaffold_test.go:61: skaffold version: v1.35.2
skaffold_test.go:64: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-20220202170808-76172 --memory=2600 --driver=docker 
E0202 17:08:30.175147   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202162514-76172/client.crt: no such file or directory
skaffold_test.go:64: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-20220202170808-76172 --memory=2600 --driver=docker : (1m24.091662593s)
skaffold_test.go:84: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:108: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe4285548420 run --minikube-profile skaffold-20220202170808-76172 --kube-context skaffold-20220202170808-76172 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:108: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe4285548420 run --minikube-profile skaffold-20220202170808-76172 --kube-context skaffold-20220202170808-76172 --status-check=true --port-forward=false --interactive=false: (22.003671601s)
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:343: "leeroy-app-654cf8b99b-89d49" [207e0c2e-1f51-4ff9-9ec8-f19033432480] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-app healthy within 5.012453131s
skaffold_test.go:117: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:343: "leeroy-web-5945f8cb9-4pcrn" [ede53787-a0bc-4568-ac5f-2ded984aaede] Running
skaffold_test.go:117: (dbg) TestSkaffold: app=leeroy-web healthy within 5.008038264s
helpers_test.go:176: Cleaning up "skaffold-20220202170808-76172" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-20220202170808-76172
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-20220202170808-76172: (15.313045938s)
--- PASS: TestSkaffold (132.92s)

TestInsufficientStorage (72.41s)

=== RUN   TestInsufficientStorage
status_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-20220202171021-76172 --memory=2048 --output=json --wait=true --driver=docker 
E0202 17:10:45.766989   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202163007-76172/client.crt: no such file or directory
status_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-20220202171021-76172 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (57.556510458s)

-- stdout --
	{"specversion":"1.0","id":"d52db3b6-53a5-4803-a01d-4aa0899b9b3f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220202171021-76172] minikube v1.25.1 on Darwin 11.2.3","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"76ddb7dd-fa03-44a9-a59a-501bfffbdd31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13251"}}
	{"specversion":"1.0","id":"47328c27-62b2-44d6-a69b-289ce5cbdc6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig"}}
	{"specversion":"1.0","id":"7a8159a5-4f59-4390-b870-47767fcf93f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"1784d1e7-f466-42db-83ae-7d5094f35037","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fe323c39-2c9f-4439-a123-b90e33a6c8f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube"}}
	{"specversion":"1.0","id":"d665cae3-0d67-481d-842b-ee5778fbc394","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"217f8af5-065e-45d9-bf60-a8af754b4b27","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"2eb25e16-52bc-4dd5-8965-6775150e0bb1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220202171021-76172 in cluster insufficient-storage-20220202171021-76172","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"cd45eaf8-a8d9-40d4-a917-666f3ea4e271","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"1920de50-7903-4e79-a844-851812bd7978","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"94ad1c73-817a-498b-bad9-dde89e84c0ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:77: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20220202171021-76172 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20220202171021-76172 --output=json --layout=cluster: exit status 7 (621.424233ms)

-- stdout --
	{"Name":"insufficient-storage-20220202171021-76172","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.25.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220202171021-76172","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0202 17:11:19.496742   85945 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220202171021-76172" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig

** /stderr **
status_test.go:77: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20220202171021-76172 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20220202171021-76172 --output=json --layout=cluster: exit status 7 (617.113375ms)

-- stdout --
	{"Name":"insufficient-storage-20220202171021-76172","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.25.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220202171021-76172","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0202 17:11:20.114128   85962 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220202171021-76172" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
	E0202 17:11:20.125277   85962 status.go:557] unable to read event log: stat: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/insufficient-storage-20220202171021-76172/events.json: no such file or directory

** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-20220202171021-76172" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-20220202171021-76172
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-20220202171021-76172: (13.615455224s)
--- PASS: TestInsufficientStorage (72.41s)

TestRunningBinaryUpgrade (125.73s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.1827790097.exe start -p running-upgrade-20220202172001-76172 --memory=2200 --vm-driver=docker 

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.1827790097.exe start -p running-upgrade-20220202172001-76172 --memory=2200 --vm-driver=docker : (1m12.703729685s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-20220202172001-76172 --memory=2200 --alsologtostderr -v=1 --driver=docker 
E0202 17:21:41.961215   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: no such file or directory

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-20220202172001-76172 --memory=2200 --alsologtostderr -v=1 --driver=docker : (44.01392559s)
helpers_test.go:176: Cleaning up "running-upgrade-20220202172001-76172" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-20220202172001-76172
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-20220202172001-76172: (8.055867889s)
--- PASS: TestRunningBinaryUpgrade (125.73s)

TestKubernetesUpgrade (220.12s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220202172207-76172 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220202172207-76172 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : (1m8.201476417s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20220202172207-76172
E0202 17:23:30.188428   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202162514-76172/client.crt: no such file or directory

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20220202172207-76172: (18.645698722s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-20220202172207-76172 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-20220202172207-76172 status --format={{.Host}}: exit status 7 (166.005776ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220202172207-76172 --memory=2200 --kubernetes-version=v1.23.3-rc.0 --alsologtostderr -v=1 --driver=docker 

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220202172207-76172 --memory=2200 --kubernetes-version=v1.23.3-rc.0 --alsologtostderr -v=1 --driver=docker : (1m53.510809911s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220202172207-76172 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220202172207-76172 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220202172207-76172 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (341.142191ms)

-- stdout --
	* [kubernetes-upgrade-20220202172207-76172] minikube v1.25.1 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=13251
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.23.3-rc.0 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20220202172207-76172
	    minikube start -p kubernetes-upgrade-20220202172207-76172 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220202172207-761722 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.23.3-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220202172207-76172 --kubernetes-version=v1.23.3-rc.0
	    

** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220202172207-76172 --memory=2200 --kubernetes-version=v1.23.3-rc.0 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:282: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220202172207-76172 --memory=2200 --kubernetes-version=v1.23.3-rc.0 --alsologtostderr -v=1 --driver=docker : (13.640797728s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-20220202172207-76172" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20220202172207-76172
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20220202172207-76172: (5.503444034s)
--- PASS: TestKubernetesUpgrade (220.12s)

TestMissingContainerUpgrade (197.12s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.1104173679.exe start -p missing-upgrade-20220202172030-76172 --memory=2200 --driver=docker 

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.1104173679.exe start -p missing-upgrade-20220202172030-76172 --memory=2200 --driver=docker : (1m22.825154391s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-20220202172030-76172

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20220202172030-76172: (17.028646858s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-20220202172030-76172
version_upgrade_test.go:336: (dbg) Run:  out/minikube-darwin-amd64 start -p missing-upgrade-20220202172030-76172 --memory=2200 --alsologtostderr -v=1 --driver=docker 
E0202 17:22:42.685142   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202163007-76172/client.crt: no such file or directory
E0202 17:23:13.312111   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202162514-76172/client.crt: no such file or directory

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-darwin-amd64 start -p missing-upgrade-20220202172030-76172 --memory=2200 --alsologtostderr -v=1 --driver=docker : (1m20.051917357s)
helpers_test.go:176: Cleaning up "missing-upgrade-20220202172030-76172" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-20220202172030-76172

=== CONT  TestMissingContainerUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-20220202172030-76172: (15.587596793s)
--- PASS: TestMissingContainerUpgrade (197.12s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (8.58s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.25.1 on darwin
- MINIKUBE_LOCATION=13251
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/upgrade-v1.11.0-to-current3516219116
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/upgrade-v1.11.0-to-current3516219116/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/upgrade-v1.11.0-to-current3516219116/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/upgrade-v1.11.0-to-current3516219116/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
E0202 17:11:41.949052   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: no such file or directory
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (8.58s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (11.55s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.25.1 on darwin
- MINIKUBE_LOCATION=13251
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/upgrade-v1.2.0-to-current3171388012
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/upgrade-v1.2.0-to-current3171388012/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/upgrade-v1.2.0-to-current3171388012/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/upgrade-v1.2.0-to-current3171388012/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (11.55s)

TestStoppedBinaryUpgrade/Setup (0.75s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.75s)

TestStoppedBinaryUpgrade/Upgrade (142.99s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.816937277.exe start -p stopped-upgrade-20220202172347-76172 --memory=2200 --vm-driver=docker 
E0202 17:24:56.013719   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202170808-76172/client.crt: no such file or directory
version_upgrade_test.go:190: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.816937277.exe start -p stopped-upgrade-20220202172347-76172 --memory=2200 --vm-driver=docker : (1m16.672073478s)
version_upgrade_test.go:199: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.816937277.exe -p stopped-upgrade-20220202172347-76172 stop
version_upgrade_test.go:199: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.816937277.exe -p stopped-upgrade-20220202172347-76172 stop: (13.852063212s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-20220202172347-76172 --memory=2200 --alsologtostderr -v=1 --driver=docker 

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-20220202172347-76172 --memory=2200 --alsologtostderr -v=1 --driver=docker : (52.464626085s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (142.99s)

TestPause/serial/Start (111.09s)

=== RUN   TestPause/serial/Start
pause_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20220202172547-76172 --memory=2048 --install-addons=false --wait=all --driver=docker 

=== CONT  TestPause/serial/Start
pause_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p pause-20220202172547-76172 --memory=2048 --install-addons=false --wait=all --driver=docker : (1m51.093274153s)
--- PASS: TestPause/serial/Start (111.09s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.82s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-20220202172347-76172
version_upgrade_test.go:213: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-20220202172347-76172: (2.823549479s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.82s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.32s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:84: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220202172618-76172 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:84: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-20220202172618-76172 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (317.929938ms)

-- stdout --
	* [NoKubernetes-20220202172618-76172] minikube v1.25.1 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=13251
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.32s)

TestNoKubernetes/serial/StartWithK8s (60.98s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220202172618-76172 --driver=docker 
E0202 17:26:41.974163   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: no such file or directory
no_kubernetes_test.go:96: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220202172618-76172 --driver=docker : (1m0.311841934s)
no_kubernetes_test.go:201: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-20220202172618-76172 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (60.98s)

TestNoKubernetes/serial/StartWithStopK8s (29.07s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:113: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220202172618-76172 --no-kubernetes --driver=docker 
E0202 17:27:25.790898   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202163007-76172/client.crt: no such file or directory
no_kubernetes_test.go:113: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220202172618-76172 --no-kubernetes --driver=docker : (14.75144918s)
no_kubernetes_test.go:201: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-20220202172618-76172 status -o json
no_kubernetes_test.go:201: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-20220202172618-76172 status -o json: exit status 2 (645.153523ms)

-- stdout --
	{"Name":"NoKubernetes-20220202172618-76172","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:125: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-20220202172618-76172

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:125: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-20220202172618-76172: (13.677477234s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (29.07s)

TestPause/serial/SecondStartNoReconfiguration (7.86s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20220202172547-76172 --alsologtostderr -v=1 --driver=docker 
E0202 17:27:42.687435   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202163007-76172/client.crt: no such file or directory
pause_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p pause-20220202172547-76172 --alsologtostderr -v=1 --driver=docker : (7.845331998s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.86s)

TestPause/serial/Pause (0.86s)

=== RUN   TestPause/serial/Pause
pause_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-20220202172547-76172 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.86s)

TestPause/serial/VerifyStatus (0.65s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:77: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-20220202172547-76172 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-20220202172547-76172 --output=json --layout=cluster: exit status 2 (648.271728ms)

-- stdout --
	{"Name":"pause-20220202172547-76172","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.25.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220202172547-76172","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.65s)

TestPause/serial/Unpause (0.89s)

=== RUN   TestPause/serial/Unpause
pause_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-20220202172547-76172 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.89s)

TestNoKubernetes/serial/Start (37.86s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220202172618-76172 --no-kubernetes --driver=docker 

=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:137: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220202172618-76172 --no-kubernetes --driver=docker : (37.855462736s)
--- PASS: TestNoKubernetes/serial/Start (37.86s)

TestPause/serial/PauseAgain (1.02s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-20220202172547-76172 --alsologtostderr -v=5
pause_test.go:111: (dbg) Done: out/minikube-darwin-amd64 pause -p pause-20220202172547-76172 --alsologtostderr -v=5: (1.021560108s)
--- PASS: TestPause/serial/PauseAgain (1.02s)

TestPause/serial/DeletePaused (15.5s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-20220202172547-76172 --alsologtostderr -v=5
pause_test.go:133: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-20220202172547-76172 --alsologtostderr -v=5: (15.501813349s)
--- PASS: TestPause/serial/DeletePaused (15.50s)

TestPause/serial/VerifyDeletedResources (5.88s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:143: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (5.529918403s)
pause_test.go:169: (dbg) Run:  docker ps -a
pause_test.go:174: (dbg) Run:  docker volume inspect pause-20220202172547-76172
pause_test.go:174: (dbg) Non-zero exit: docker volume inspect pause-20220202172547-76172: exit status 1 (113.699547ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such volume: pause-20220202172547-76172

** /stderr **
pause_test.go:179: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (5.88s)

TestNetworkPlugins/group/auto/Start (93.81s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-20220202171133-76172 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker 

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p auto-20220202171133-76172 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker : (1m33.805918583s)
--- PASS: TestNetworkPlugins/group/auto/Start (93.81s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.65s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:148: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20220202172618-76172 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:148: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20220202172618-76172 "sudo systemctl is-active --quiet service kubelet": exit status 1 (650.062608ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.65s)

TestNoKubernetes/serial/ProfileList (2.22s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:170: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:170: (dbg) Done: out/minikube-darwin-amd64 profile list: (1.08189067s)
no_kubernetes_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
no_kubernetes_test.go:180: (dbg) Done: out/minikube-darwin-amd64 profile list --output=json: (1.141151886s)
--- PASS: TestNoKubernetes/serial/ProfileList (2.22s)

TestNoKubernetes/serial/Stop (2.03s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-20220202172618-76172
E0202 17:28:30.196313   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202162514-76172/client.crt: no such file or directory
no_kubernetes_test.go:159: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-20220202172618-76172: (2.02524516s)
--- PASS: TestNoKubernetes/serial/Stop (2.03s)

TestNoKubernetes/serial/StartNoArgs (16.93s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:192: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220202172618-76172 --driver=docker 
no_kubernetes_test.go:192: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220202172618-76172 --driver=docker : (16.931439485s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (16.93s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.62s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:148: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20220202172618-76172 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:148: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20220202172618-76172 "sudo systemctl is-active --quiet service kubelet": exit status 1 (618.304569ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.62s)

TestNetworkPlugins/group/false/Start (115.6s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p false-20220202171134-76172 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker 

=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p false-20220202171134-76172 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker : (1m55.604033367s)
--- PASS: TestNetworkPlugins/group/false/Start (115.60s)

TestNetworkPlugins/group/auto/KubeletFlags (0.73s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-20220202171133-76172 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.73s)

TestNetworkPlugins/group/auto/NetCatPod (15.92s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context auto-20220202171133-76172 replace --force -f testdata/netcat-deployment.yaml
net_test.go:132: (dbg) Done: kubectl --context auto-20220202171133-76172 replace --force -f testdata/netcat-deployment.yaml: (1.883830549s)
net_test.go:146: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-dhvp4" [aded0c93-956e-46d6-bd31-0b5c62843793] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0202 17:29:56.013421   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202170808-76172/client.crt: no such file or directory
helpers_test.go:343: "netcat-668db85669-dhvp4" [aded0c93-956e-46d6-bd31-0b5c62843793] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 14.016199148s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (15.92s)

TestNetworkPlugins/group/auto/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:163: (dbg) Run:  kubectl --context auto-20220202171133-76172 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:182: (dbg) Run:  kubectl --context auto-20220202171133-76172 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (5.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:232: (dbg) Run:  kubectl --context auto-20220202171133-76172 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:232: (dbg) Non-zero exit: kubectl --context auto-20220202171133-76172 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.131390518s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.13s)

TestNetworkPlugins/group/cilium/Start (128.25s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p cilium-20220202171134-76172 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker 

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p cilium-20220202171134-76172 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker : (2m8.251534848s)
--- PASS: TestNetworkPlugins/group/cilium/Start (128.25s)

TestNetworkPlugins/group/false/KubeletFlags (0.66s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-20220202171134-76172 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.66s)

TestNetworkPlugins/group/false/NetCatPod (15.96s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context false-20220202171134-76172 replace --force -f testdata/netcat-deployment.yaml
net_test.go:132: (dbg) Done: kubectl --context false-20220202171134-76172 replace --force -f testdata/netcat-deployment.yaml: (1.932713616s)
net_test.go:146: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-mzg9j" [3c19f6c3-936a-4f7a-8ea5-5b0cd19932d0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-mzg9j" [3c19f6c3-936a-4f7a-8ea5-5b0cd19932d0] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 14.007800891s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (15.96s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:163: (dbg) Run:  kubectl --context false-20220202171134-76172 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:182: (dbg) Run:  kubectl --context false-20220202171134-76172 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (5.14s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:232: (dbg) Run:  kubectl --context false-20220202171134-76172 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0202 17:31:19.140286   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202170808-76172/client.crt: no such file or directory
net_test.go:232: (dbg) Non-zero exit: kubectl --context false-20220202171134-76172 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.142407114s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.14s)
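The Localhost and HairPin checks above both shell into the netcat pod and run `nc -w 5 -i 5 -z <target> 8080`, i.e. a TCP connect probe with no data transfer (note the HairPin entry here passes even though nc exited non-zero; for some network configurations the hairpin probe is expected to fail). A minimal local sketch of that probe in Python, against a throwaway listener rather than the cluster (not minikube code):

```python
import socket

def port_open(host, port, timeout=5.0):
    """Rough equivalent of `nc -w 5 -z host port`: attempt a TCP
    connect and report success without sending any payload."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Stand up a throwaway listener to play the role of netcat:8080.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))            # ephemeral port
srv.listen(1)
_, port = srv.getsockname()
print(port_open("127.0.0.1", port))   # listener up -> True
srv.close()
print(port_open("127.0.0.1", port))   # listener gone -> False
```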

                                                
                                    
TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:343: "cilium-t6fns" [db6ea5bc-9a27-402d-928b-79ad813e4d03] Running
net_test.go:107: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.013875397s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/cilium/KubeletFlags (0.76s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cilium-20220202171134-76172 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.76s)

                                                
                                    
TestNetworkPlugins/group/cilium/NetCatPod (16.36s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context cilium-20220202171134-76172 replace --force -f testdata/netcat-deployment.yaml
net_test.go:132: (dbg) Done: kubectl --context cilium-20220202171134-76172 replace --force -f testdata/netcat-deployment.yaml: (3.317274039s)
net_test.go:146: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-zzk8h" [9dc6e634-3cde-4921-80c0-8aefb5611cf8] Pending
helpers_test.go:343: "netcat-668db85669-zzk8h" [9dc6e634-3cde-4921-80c0-8aefb5611cf8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0202 17:32:42.693213   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202163007-76172/client.crt: no such file or directory
helpers_test.go:343: "netcat-668db85669-zzk8h" [9dc6e634-3cde-4921-80c0-8aefb5611cf8] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 13.014028853s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (16.36s)

                                                
                                    
TestNetworkPlugins/group/cilium/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:163: (dbg) Run:  kubectl --context cilium-20220202171134-76172 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/cilium/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:182: (dbg) Run:  kubectl --context cilium-20220202171134-76172 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/cilium/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:232: (dbg) Run:  kubectl --context cilium-20220202171134-76172 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/Start (69.73s)

=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-weave-20220202171134-76172 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker 
E0202 17:33:05.116686   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: no such file or directory
E0202 17:33:30.203723   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202162514-76172/client.crt: no such file or directory
net_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p custom-weave-20220202171134-76172 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker : (1m9.728883048s)
--- PASS: TestNetworkPlugins/group/custom-weave/Start (69.73s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/KubeletFlags (0.68s)

=== RUN   TestNetworkPlugins/group/custom-weave/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-weave-20220202171134-76172 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (0.68s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/NetCatPod (16.13s)

=== RUN   TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context custom-weave-20220202171134-76172 replace --force -f testdata/netcat-deployment.yaml
net_test.go:132: (dbg) Done: kubectl --context custom-weave-20220202171134-76172 replace --force -f testdata/netcat-deployment.yaml: (2.035179639s)
net_test.go:146: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-dhwsf" [8e78e79a-4355-4903-8934-3c4db9599bcd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-dhwsf" [8e78e79a-4355-4903-8934-3c4db9599bcd] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 14.011285767s
--- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (16.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (60.83s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-20220202171133-76172 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker 
E0202 17:34:48.846318   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/auto-20220202171133-76172/client.crt: no such file or directory
E0202 17:34:48.851463   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/auto-20220202171133-76172/client.crt: no such file or directory
E0202 17:34:48.861587   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/auto-20220202171133-76172/client.crt: no such file or directory
E0202 17:34:48.882217   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/auto-20220202171133-76172/client.crt: no such file or directory
E0202 17:34:48.922460   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/auto-20220202171133-76172/client.crt: no such file or directory
E0202 17:34:49.076892   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/auto-20220202171133-76172/client.crt: no such file or directory
E0202 17:34:49.243396   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/auto-20220202171133-76172/client.crt: no such file or directory
E0202 17:34:49.568417   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/auto-20220202171133-76172/client.crt: no such file or directory
E0202 17:34:50.208624   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/auto-20220202171133-76172/client.crt: no such file or directory
E0202 17:34:51.494651   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/auto-20220202171133-76172/client.crt: no such file or directory
E0202 17:34:54.055930   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/auto-20220202171133-76172/client.crt: no such file or directory
E0202 17:34:56.027161   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202170808-76172/client.crt: no such file or directory
E0202 17:34:59.179812   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/auto-20220202171133-76172/client.crt: no such file or directory
E0202 17:35:09.427403   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/auto-20220202171133-76172/client.crt: no such file or directory
E0202 17:35:29.908721   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/auto-20220202171133-76172/client.crt: no such file or directory
net_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-20220202171133-76172 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker : (1m0.829037479s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (60.83s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.71s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-20220202171133-76172 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.71s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (17.06s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context enable-default-cni-20220202171133-76172 replace --force -f testdata/netcat-deployment.yaml
net_test.go:132: (dbg) Done: kubectl --context enable-default-cni-20220202171133-76172 replace --force -f testdata/netcat-deployment.yaml: (2.019890811s)
net_test.go:146: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-p5xf4" [3d4529c0-87a4-42c2-b9ab-62edc56ef2fa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-p5xf4" [3d4529c0-87a4-42c2-b9ab-62edc56ef2fa] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 15.011453582s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (17.06s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (72.36s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-20220202171133-76172 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker 
E0202 17:41:41.991979   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202161336-76172/client.crt: no such file or directory
E0202 17:41:58.129115   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/custom-weave-20220202171134-76172/client.crt: no such file or directory
E0202 17:42:28.388355   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/cilium-20220202171134-76172/client.crt: no such file or directory
E0202 17:42:42.706399   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202163007-76172/client.crt: no such file or directory
net_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-20220202171133-76172 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker : (1m12.363442736s)
--- PASS: TestNetworkPlugins/group/bridge/Start (72.36s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.67s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-20220202171133-76172 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.67s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (18.95s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context bridge-20220202171133-76172 replace --force -f testdata/netcat-deployment.yaml
net_test.go:132: (dbg) Done: kubectl --context bridge-20220202171133-76172 replace --force -f testdata/netcat-deployment.yaml: (1.913904933s)
net_test.go:146: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-fwpth" [c589de8d-52e5-45fc-bffc-53847586075d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0202 17:42:56.073483   76172 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-75361-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/cilium-20220202171134-76172/client.crt: no such file or directory
helpers_test.go:343: "netcat-668db85669-fwpth" [c589de8d-52e5-45fc-bffc-53847586075d] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 17.007553149s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (18.95s)

                                                
                                    

Test skip (19/227)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:123: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:142: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.23.2/cached-images
aaa_download_only_test.go:123: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.23.2/binaries
aaa_download_only_test.go:142: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.3-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.23.3-rc.0/cached-images
aaa_download_only_test.go:123: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.3-rc.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.3-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.23.3-rc.0/binaries
aaa_download_only_test.go:142: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.3-rc.0/binaries (0.00s)

                                                
                                    
TestAddons/parallel/Registry (14.89s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:281: registry stabilized in 14.354404ms
addons_test.go:283: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:343: "registry-2flh5" [6e619784-6af9-41ef-a4ef-5024d805ac23] Running
addons_test.go:283: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.015892986s
addons_test.go:286: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:343: "registry-proxy-n52x4" [b425d15d-9d83-4418-9ed5-21c8e923628e] Running
addons_test.go:286: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.018426024s
addons_test.go:291: (dbg) Run:  kubectl --context addons-20220202161336-76172 delete po -l run=registry-test --now
addons_test.go:296: (dbg) Run:  kubectl --context addons-20220202161336-76172 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:296: (dbg) Done: kubectl --context addons-20220202161336-76172 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.772323147s)
addons_test.go:306: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (14.89s)

                                                
                                    
TestAddons/parallel/Ingress (10.72s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:163: (dbg) Run:  kubectl --context addons-20220202161336-76172 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Run:  kubectl --context addons-20220202161336-76172 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:196: (dbg) Run:  kubectl --context addons-20220202161336-76172 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:201: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [220cb888-9085-4602-9707-a512f0da00ff] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:343: "nginx" [220cb888-9085-4602-9707-a512f0da00ff] Running
addons_test.go:201: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.010712294s
addons_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220202161336-76172 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:233: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (10.72s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:449: Skipping Olm addon till images are fixed
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmd (12.87s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1439: (dbg) Run:  kubectl --context functional-20220202162514-76172 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-20220202162514-76172 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:343: "hello-node-54fbb85-24xrw" [19b7cbab-7629-4945-b030-d41a468aa82f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:343: "hello-node-54fbb85-24xrw" [19b7cbab-7629-4945-b030-d41a468aa82f] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 12.015532139s
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220202162514-76172 service list
functional_test.go:1464: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmd (12.87s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:35: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (36.8s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:163: (dbg) Run:  kubectl --context ingress-addon-legacy-20220202163007-76172 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:163: (dbg) Done: kubectl --context ingress-addon-legacy-20220202163007-76172 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (13.109484404s)
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20220202163007-76172 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:183: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-20220202163007-76172 replace --force -f testdata/nginx-ingress-v1beta1.yaml: exit status 1 (199.013324ms)
** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: dial tcp 10.105.101.137:443: connect: connection refused
** /stderr **
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20220202163007-76172 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:183: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-20220202163007-76172 replace --force -f testdata/nginx-ingress-v1beta1.yaml: exit status 1 (157.387961ms)
** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: dial tcp 10.105.101.137:443: connect: connection refused
** /stderr **
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20220202163007-76172 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:183: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-20220202163007-76172 replace --force -f testdata/nginx-ingress-v1beta1.yaml: exit status 1 (157.824224ms)
** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: dial tcp 10.105.101.137:443: connect: connection refused
** /stderr **
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20220202163007-76172 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:183: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-20220202163007-76172 replace --force -f testdata/nginx-ingress-v1beta1.yaml: exit status 1 (157.884861ms)
** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: dial tcp 10.105.101.137:443: connect: connection refused
** /stderr **
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20220202163007-76172 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:183: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-20220202163007-76172 replace --force -f testdata/nginx-ingress-v1beta1.yaml: exit status 1 (161.183717ms)
** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: dial tcp 10.105.101.137:443: connect: connection refused
** /stderr **
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20220202163007-76172 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:196: (dbg) Run:  kubectl --context ingress-addon-legacy-20220202163007-76172 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [2fc5b3cd-1aef-4e18-942a-66c76ed635df] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:343: "nginx" [2fc5b3cd-1aef-4e18-942a-66c76ed635df] Running
addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.015393978s
addons_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220202163007-76172 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:233: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestIngressAddonLegacy/serial/ValidateIngressAddons (36.80s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/flannel (0.85s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:77: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:176: Cleaning up "flannel-20220202171133-76172" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p flannel-20220202171133-76172
--- SKIP: TestNetworkPlugins/group/flannel (0.85s)