Test Report: Docker_Linux 13639

60328d4d40a11ac7c18c6243f597bcfbb3050148:2022-05-11:23896

Test fail (8/283)

TestFunctional/serial/ComponentHealth (2.34s)
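This subtest fails because kube-apiserver's Ready condition is False even though its phase is Running (see `functional_test.go:828` in the log below). The readiness rule the test applies can be sketched in a few lines of Python, using the exact conditions reported for kube-apiserver in this run; `is_ready` is an illustrative helper, not a function from the test suite:

```python
# Sketch of the check TestFunctional/serial/ComponentHealth performs on each
# control-plane pod: the pod counts as healthy only if its "Ready" condition
# has status "True". A Running phase alone is not sufficient.

def is_ready(conditions):
    """Return True if the pod's Ready condition has status "True"."""
    return any(c["type"] == "Ready" and c["status"] == "True" for c in conditions)

# Conditions reported for kube-apiserver in this run (from the failure line):
# Ready and ContainersReady are False, so the test fails despite Phase:Running.
apiserver_conditions = [
    {"type": "Initialized", "status": "True"},
    {"type": "Ready", "status": "False"},
    {"type": "ContainersReady", "status": "False"},
    {"type": "PodScheduled", "status": "True"},
]

print(is_ready(apiserver_conditions))  # False: Running but not Ready
```

A RestartCount of 1 in the container status suggests the apiserver restarted shortly before the readiness probe was sampled.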

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:805: (dbg) Run:  kubectl --context functional-20220511225632-7294 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:820: etcd phase: Running
functional_test.go:830: etcd status: Ready
functional_test.go:820: kube-apiserver phase: Running
functional_test.go:828: kube-apiserver is not Ready: {Phase:Running Conditions:[{Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.49.2 PodIP:192.168.49.2 StartTime:2022-05-11 22:56:56 +0000 UTC ContainerStatuses:[{Name:kube-apiserver State:{Waiting:<nil> Running:0xc00000fa88 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc0006e8310} Ready:false RestartCount:1 Image:k8s.gcr.io/kube-apiserver:v1.23.5 ImageID:docker-pullable://k8s.gcr.io/kube-apiserver@sha256:ddf5bf7196eb534271f9e5d403f4da19838d5610bb5ca191001bde5f32b5492e ContainerID:docker://640a51c74016597821125f5f85706eddec23ca6c17e7dc241da0fd1e1f46302c}]}
functional_test.go:820: kube-controller-manager phase: Running
functional_test.go:830: kube-controller-manager status: Ready
functional_test.go:820: kube-scheduler phase: Running
functional_test.go:830: kube-scheduler status: Ready
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220511225632-7294
helpers_test.go:235: (dbg) docker inspect functional-20220511225632-7294:

-- stdout --
	[
	    {
	        "Id": "297a9b29d75379ae614ab55c9b88719b8a750dbd7b65f91c436e1ceedcd73700",
	        "Created": "2022-05-11T22:56:40.619241008Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 29659,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-11T22:56:40.989966832Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8a42e1145657f551cd435eddb43b96ab44d0facbe44106da934225366eeb7757",
	        "ResolvConfPath": "/var/lib/docker/containers/297a9b29d75379ae614ab55c9b88719b8a750dbd7b65f91c436e1ceedcd73700/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/297a9b29d75379ae614ab55c9b88719b8a750dbd7b65f91c436e1ceedcd73700/hostname",
	        "HostsPath": "/var/lib/docker/containers/297a9b29d75379ae614ab55c9b88719b8a750dbd7b65f91c436e1ceedcd73700/hosts",
	        "LogPath": "/var/lib/docker/containers/297a9b29d75379ae614ab55c9b88719b8a750dbd7b65f91c436e1ceedcd73700/297a9b29d75379ae614ab55c9b88719b8a750dbd7b65f91c436e1ceedcd73700-json.log",
	        "Name": "/functional-20220511225632-7294",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-20220511225632-7294:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-20220511225632-7294",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/483b0956a4e375533cceb20533b79b8ea13606751224172f6843ef594e2d9e57-init/diff:/var/lib/docker/overlay2/481ef5eb4df205b13d647c107a3101bb0cfb2ac6238024ebbdc415acba840ac3/diff:/var/lib/docker/overlay2/44ef5ffd67acceb963fbf4cdcde72e27eaf67db91720a5686c47a2ae719809f9/diff:/var/lib/docker/overlay2/7ff54885d7e73b28c072dd4c473d278ed1466486f2f98179ee07e6c6422805c9/diff:/var/lib/docker/overlay2/d0b295cb8ada7da56d6549b14ca3e7b6a9e2afc0be8e503095107cac74d2a3f7/diff:/var/lib/docker/overlay2/3fdb692340656f907b8a43439a8391a2e69c4a45237c37e1d60c8ab8b18134de/diff:/var/lib/docker/overlay2/ce96c5d9236d7dbf050f4a26d46beafb112fcdb1b99fd0e59aab0bf3b193fb31/diff:/var/lib/docker/overlay2/362cc1c81285daac3e4db5af5bf8bbb2629e6523f1eb8fc17820bea7a6d9baf6/diff:/var/lib/docker/overlay2/aeb2974007b88ff3d614b4625092ea83c3c6078ba4a81d6d9f963be18e81fe69/diff:/var/lib/docker/overlay2/618b7d0e6c402c44813c15391c811c50273ecca1c7afa805dc1154ac15783fd8/diff:/var/lib/docker/overlay2/518382
718f741ff88f2a15b4742b5269f766985211c081e1294953249c9d2f18/diff:/var/lib/docker/overlay2/29c1818c997ca6b4d1669cb9fbf0b9e9952cb5a4f75318dc83610d05108109e7/diff:/var/lib/docker/overlay2/4ca08ab854a5bc4ea21b75f275f1abdadb03b79d8d316cb22c34eebb9d7db763/diff:/var/lib/docker/overlay2/d01a62458c8e9ffd29f4eb55dea1bf7e3b9f40b2cf97aac0dc26e6905158a6e1/diff:/var/lib/docker/overlay2/df8cbcf60376f50c16dc844babd34f14f0025fb70e372d63939b7843aaaf573a/diff:/var/lib/docker/overlay2/9841b2c5577feffb0829804aae929612848d09bff6890f646fef522fff253805/diff:/var/lib/docker/overlay2/dd179b3df1d8d5cb134817000dbac174dda79dcba018a0e1463092ab3bee5917/diff:/var/lib/docker/overlay2/ab000a8fb4aab002b6741795a30b9f4a2f4a9991a55186a5c33224b5c645dbd9/diff:/var/lib/docker/overlay2/3c9e5cab81e8274fae5913e603b8af09571cdeb02c5901d3a05c3266295c1a5f/diff:/var/lib/docker/overlay2/2f020f41c690b4ba78947fd0a89c7cd02e66039c58d08826e064bf3c73c1a235/diff:/var/lib/docker/overlay2/6b6ece698eef5a9aff115bcb30a9dd6a7c45a281e1782b655b5ca871e91cc39b/diff:/var/lib/d
ocker/overlay2/d9cff6a065f8a5c1ebddb19d216253876ced33e502d362c8283655902a4e6a18/diff:/var/lib/docker/overlay2/55476b4128c7c3982f852fd6806bb6fbb16f54fbd0be9d96233867c13fc6e4af/diff:/var/lib/docker/overlay2/e6a57691d6921e1675f93cee749cc18b4b353f7adbd14d05af0da48cf32cddba/diff:/var/lib/docker/overlay2/0c19aee5e4c0dfebe55be9d7eb972a0dcc84ce9a59283396e5d0213269b405e1/diff:/var/lib/docker/overlay2/5f1b35290c6d86412de46be9df212ef0c94759a5eaa6519e24944f9d040cb6d5/diff:/var/lib/docker/overlay2/a2c96b37966fd7143034837aa05c57d00661838f1f42c9572899bd94d5bfec2d/diff:/var/lib/docker/overlay2/83aeec0e301d1fcfc1f247abd5e9d59c1cce47d9836550601333c482862eb3c4/diff:/var/lib/docker/overlay2/e12c190d34c775f1c32c2b64baa66a78fbdc83dbfa90ce6d7bd58c09ded96d69/diff:/var/lib/docker/overlay2/3983af7d86faf879b24e2a4906a9946e109a79baef58e4c208852845cce2a82a/diff:/var/lib/docker/overlay2/d11872013c6cbd6629a95f12c5e9d792b8c8f3a2d554398703c9d79881201733/diff:/var/lib/docker/overlay2/8764f20fe82fa0293ca9d0a908b3a74b268bc0eb064fa5db25d594c9099
dc7a9/diff:/var/lib/docker/overlay2/b998d9c59e72d7a6a39120be33a9c3929dcec3a21ce94525c7eb0026a20866ba/diff:/var/lib/docker/overlay2/ca0bfa3e2e36eeb161f4e2d4af49743810843aec10f16499ff3965530201431a/diff:/var/lib/docker/overlay2/7fba6225316b01ff4b801ac40682a3793ee00e3bdfd842aab2846f7c617f7e25/diff:/var/lib/docker/overlay2/82bcd40ee054fc1e5a22d92cb6bc25ef0b1aa8db3ed611fc2d061d487821d4f2/diff:/var/lib/docker/overlay2/a6344600b434bec2a336e523b24640486430591b0f091ab261fb991ffad5b728/diff:/var/lib/docker/overlay2/2627822a91f93e2419dda6670baae5f2d4643ed0ff2053b2326e6ce946e4f47b/diff:/var/lib/docker/overlay2/f1a3997c73ab9f38b321d361e56d041da5d5eebf5c95a4d5193e9269be85c82c/diff:/var/lib/docker/overlay2/a9432d575a3e1768603824c7f970bdca3828d2be7a0f213f8b5cda4106c3f9cf/diff:/var/lib/docker/overlay2/fa10e95b75bb4119664377fe1dbdbe3ba9905415b00e4756dc283d7fe361d3c0/diff:/var/lib/docker/overlay2/b15d5e2ea2d5aaffaeb03b448f01601b3280b5a8c365ab48d985c7dfa93570db/diff:/var/lib/docker/overlay2/708816f20892c0e4b94cbe8e80b169fff54ffd
870bcde395bd8163dd03b25d0f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/483b0956a4e375533cceb20533b79b8ea13606751224172f6843ef594e2d9e57/merged",
	                "UpperDir": "/var/lib/docker/overlay2/483b0956a4e375533cceb20533b79b8ea13606751224172f6843ef594e2d9e57/diff",
	                "WorkDir": "/var/lib/docker/overlay2/483b0956a4e375533cceb20533b79b8ea13606751224172f6843ef594e2d9e57/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-20220511225632-7294",
	                "Source": "/var/lib/docker/volumes/functional-20220511225632-7294/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-20220511225632-7294",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-20220511225632-7294",
	                "name.minikube.sigs.k8s.io": "functional-20220511225632-7294",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f5692a00c47bb6e269a72d36be2abfd0bb10f0fff3ae9930094e35c3b20de923",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49167"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49166"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49163"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49165"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49164"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f5692a00c47b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-20220511225632-7294": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "297a9b29d753",
	                        "functional-20220511225632-7294"
	                    ],
	                    "NetworkID": "076a6e5580142bce22350a0b3a99d49b2e6d366be2df778a62fec88cae17dac6",
	                    "EndpointID": "98f9ea35af862e48882bd02b47885c015b7cd19f8dc83bff4751c03e2d458cc3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-20220511225632-7294 -n functional-20220511225632-7294
helpers_test.go:244: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-20220511225632-7294 logs -n 25: (1.330982527s)
helpers_test.go:252: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------------------|--------------------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |            Profile             |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|--------------------------------|---------|---------|---------------------|---------------------|
	| pause   | nospam-20220511225550-7294                                               | nospam-20220511225550-7294     | jenkins | v1.25.2 | 11 May 22 22:56 UTC | 11 May 22 22:56 UTC |
	|         | --log_dir                                                                |                                |         |         |                     |                     |
	|         | /tmp/nospam-20220511225550-7294                                          |                                |         |         |                     |                     |
	|         | pause                                                                    |                                |         |         |                     |                     |
	| unpause | nospam-20220511225550-7294                                               | nospam-20220511225550-7294     | jenkins | v1.25.2 | 11 May 22 22:56 UTC | 11 May 22 22:56 UTC |
	|         | --log_dir                                                                |                                |         |         |                     |                     |
	|         | /tmp/nospam-20220511225550-7294                                          |                                |         |         |                     |                     |
	|         | unpause                                                                  |                                |         |         |                     |                     |
	| unpause | nospam-20220511225550-7294                                               | nospam-20220511225550-7294     | jenkins | v1.25.2 | 11 May 22 22:56 UTC | 11 May 22 22:56 UTC |
	|         | --log_dir                                                                |                                |         |         |                     |                     |
	|         | /tmp/nospam-20220511225550-7294                                          |                                |         |         |                     |                     |
	|         | unpause                                                                  |                                |         |         |                     |                     |
	| unpause | nospam-20220511225550-7294                                               | nospam-20220511225550-7294     | jenkins | v1.25.2 | 11 May 22 22:56 UTC | 11 May 22 22:56 UTC |
	|         | --log_dir                                                                |                                |         |         |                     |                     |
	|         | /tmp/nospam-20220511225550-7294                                          |                                |         |         |                     |                     |
	|         | unpause                                                                  |                                |         |         |                     |                     |
	| stop    | nospam-20220511225550-7294                                               | nospam-20220511225550-7294     | jenkins | v1.25.2 | 11 May 22 22:56 UTC | 11 May 22 22:56 UTC |
	|         | --log_dir                                                                |                                |         |         |                     |                     |
	|         | /tmp/nospam-20220511225550-7294                                          |                                |         |         |                     |                     |
	|         | stop                                                                     |                                |         |         |                     |                     |
	| stop    | nospam-20220511225550-7294                                               | nospam-20220511225550-7294     | jenkins | v1.25.2 | 11 May 22 22:56 UTC | 11 May 22 22:56 UTC |
	|         | --log_dir                                                                |                                |         |         |                     |                     |
	|         | /tmp/nospam-20220511225550-7294                                          |                                |         |         |                     |                     |
	|         | stop                                                                     |                                |         |         |                     |                     |
	| stop    | nospam-20220511225550-7294                                               | nospam-20220511225550-7294     | jenkins | v1.25.2 | 11 May 22 22:56 UTC | 11 May 22 22:56 UTC |
	|         | --log_dir                                                                |                                |         |         |                     |                     |
	|         | /tmp/nospam-20220511225550-7294                                          |                                |         |         |                     |                     |
	|         | stop                                                                     |                                |         |         |                     |                     |
	| delete  | -p nospam-20220511225550-7294                                            | nospam-20220511225550-7294     | jenkins | v1.25.2 | 11 May 22 22:56 UTC | 11 May 22 22:56 UTC |
	| start   | -p                                                                       | functional-20220511225632-7294 | jenkins | v1.25.2 | 11 May 22 22:56 UTC | 11 May 22 22:57 UTC |
	|         | functional-20220511225632-7294                                           |                                |         |         |                     |                     |
	|         | --memory=4000                                                            |                                |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                                |         |         |                     |                     |
	|         | --wait=all --driver=docker                                               |                                |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                                |         |         |                     |                     |
	| start   | -p                                                                       | functional-20220511225632-7294 | jenkins | v1.25.2 | 11 May 22 22:57 UTC | 11 May 22 22:57 UTC |
	|         | functional-20220511225632-7294                                           |                                |         |         |                     |                     |
	|         | --alsologtostderr -v=8                                                   |                                |         |         |                     |                     |
	| cache   | functional-20220511225632-7294                                           | functional-20220511225632-7294 | jenkins | v1.25.2 | 11 May 22 22:57 UTC | 11 May 22 22:57 UTC |
	|         | cache add k8s.gcr.io/pause:3.1                                           |                                |         |         |                     |                     |
	| cache   | functional-20220511225632-7294                                           | functional-20220511225632-7294 | jenkins | v1.25.2 | 11 May 22 22:57 UTC | 11 May 22 22:57 UTC |
	|         | cache add k8s.gcr.io/pause:3.3                                           |                                |         |         |                     |                     |
	| cache   | functional-20220511225632-7294                                           | functional-20220511225632-7294 | jenkins | v1.25.2 | 11 May 22 22:57 UTC | 11 May 22 22:57 UTC |
	|         | cache add                                                                |                                |         |         |                     |                     |
	|         | k8s.gcr.io/pause:latest                                                  |                                |         |         |                     |                     |
	| cache   | functional-20220511225632-7294 cache add                                 | functional-20220511225632-7294 | jenkins | v1.25.2 | 11 May 22 22:57 UTC | 11 May 22 22:57 UTC |
	|         | minikube-local-cache-test:functional-20220511225632-7294                 |                                |         |         |                     |                     |
	| cache   | functional-20220511225632-7294 cache delete                              | functional-20220511225632-7294 | jenkins | v1.25.2 | 11 May 22 22:57 UTC | 11 May 22 22:57 UTC |
	|         | minikube-local-cache-test:functional-20220511225632-7294                 |                                |         |         |                     |                     |
	| cache   | delete k8s.gcr.io/pause:3.3                                              | minikube                       | jenkins | v1.25.2 | 11 May 22 22:57 UTC | 11 May 22 22:57 UTC |
	| cache   | list                                                                     | minikube                       | jenkins | v1.25.2 | 11 May 22 22:57 UTC | 11 May 22 22:57 UTC |
	| ssh     | functional-20220511225632-7294                                           | functional-20220511225632-7294 | jenkins | v1.25.2 | 11 May 22 22:57 UTC | 11 May 22 22:57 UTC |
	|         | ssh sudo crictl images                                                   |                                |         |         |                     |                     |
	| ssh     | functional-20220511225632-7294                                           | functional-20220511225632-7294 | jenkins | v1.25.2 | 11 May 22 22:57 UTC | 11 May 22 22:57 UTC |
	|         | ssh sudo docker rmi                                                      |                                |         |         |                     |                     |
	|         | k8s.gcr.io/pause:latest                                                  |                                |         |         |                     |                     |
	| cache   | functional-20220511225632-7294                                           | functional-20220511225632-7294 | jenkins | v1.25.2 | 11 May 22 22:57 UTC | 11 May 22 22:57 UTC |
	|         | cache reload                                                             |                                |         |         |                     |                     |
	| ssh     | functional-20220511225632-7294                                           | functional-20220511225632-7294 | jenkins | v1.25.2 | 11 May 22 22:57 UTC | 11 May 22 22:57 UTC |
	|         | ssh sudo crictl inspecti                                                 |                                |         |         |                     |                     |
	|         | k8s.gcr.io/pause:latest                                                  |                                |         |         |                     |                     |
	| cache   | delete k8s.gcr.io/pause:3.1                                              | minikube                       | jenkins | v1.25.2 | 11 May 22 22:57 UTC | 11 May 22 22:57 UTC |
	| cache   | delete k8s.gcr.io/pause:latest                                           | minikube                       | jenkins | v1.25.2 | 11 May 22 22:57 UTC | 11 May 22 22:57 UTC |
	| kubectl | functional-20220511225632-7294                                           | functional-20220511225632-7294 | jenkins | v1.25.2 | 11 May 22 22:57 UTC | 11 May 22 22:57 UTC |
	|         | kubectl -- --context                                                     |                                |         |         |                     |                     |
	|         | functional-20220511225632-7294                                           |                                |         |         |                     |                     |
	|         | get pods                                                                 |                                |         |         |                     |                     |
	| start   | -p functional-20220511225632-7294                                        | functional-20220511225632-7294 | jenkins | v1.25.2 | 11 May 22 22:57 UTC | 11 May 22 22:57 UTC |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                                |         |         |                     |                     |
	|         | --wait=all                                                               |                                |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|--------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/11 22:57:26
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.18.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0511 22:57:26.299711   35457 out.go:296] Setting OutFile to fd 1 ...
	I0511 22:57:26.299843   35457 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 22:57:26.299846   35457 out.go:309] Setting ErrFile to fd 2...
	I0511 22:57:26.299850   35457 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 22:57:26.299951   35457 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/bin
	I0511 22:57:26.300218   35457 out.go:303] Setting JSON to false
	I0511 22:57:26.301414   35457 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2388,"bootTime":1652307458,"procs":529,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0511 22:57:26.301485   35457 start.go:125] virtualization: kvm guest
	I0511 22:57:26.304447   35457 out.go:177] * [functional-20220511225632-7294] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0511 22:57:26.306068   35457 notify.go:193] Checking for updates...
	I0511 22:57:26.307958   35457 out.go:177]   - MINIKUBE_LOCATION=13639
	I0511 22:57:26.309833   35457 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0511 22:57:26.311746   35457 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	I0511 22:57:26.313435   35457 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube
	I0511 22:57:26.315074   35457 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0511 22:57:26.317023   35457 config.go:178] Loaded profile config "functional-20220511225632-7294": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0511 22:57:26.317070   35457 driver.go:358] Setting default libvirt URI to qemu:///system
	I0511 22:57:26.357468   35457 docker.go:137] docker version: linux-20.10.15
	I0511 22:57:26.357567   35457 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 22:57:26.460382   35457 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:39 SystemTime:2022-05-11 22:57:26.386005201 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0511 22:57:26.460485   35457 docker.go:254] overlay module found
	I0511 22:57:26.462947   35457 out.go:177] * Using the docker driver based on existing profile
	I0511 22:57:26.464441   35457 start.go:284] selected driver: docker
	I0511 22:57:26.464447   35457 start.go:801] validating driver "docker" against &{Name:functional-20220511225632-7294 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220511225632-7294 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0511 22:57:26.464557   35457 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0511 22:57:26.464721   35457 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 22:57:26.563830   35457 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:39 SystemTime:2022-05-11 22:57:26.493941054 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0511 22:57:26.564371   35457 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0511 22:57:26.564388   35457 cni.go:95] Creating CNI manager for ""
	I0511 22:57:26.564394   35457 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0511 22:57:26.564400   35457 start_flags.go:306] config:
	{Name:functional-20220511225632-7294 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220511225632-7294 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0511 22:57:26.566990   35457 out.go:177] * Starting control plane node functional-20220511225632-7294 in cluster functional-20220511225632-7294
	I0511 22:57:26.568711   35457 cache.go:120] Beginning downloading kic base image for docker with docker
	I0511 22:57:26.570103   35457 out.go:177] * Pulling base image ...
	I0511 22:57:26.571496   35457 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0511 22:57:26.571537   35457 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0511 22:57:26.571544   35457 cache.go:57] Caching tarball of preloaded images
	I0511 22:57:26.571613   35457 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon
	I0511 22:57:26.571779   35457 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0511 22:57:26.571788   35457 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0511 22:57:26.571910   35457 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294/config.json ...
	I0511 22:57:26.615635   35457 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon, skipping pull
	I0511 22:57:26.615656   35457 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a exists in daemon, skipping load
	I0511 22:57:26.615674   35457 cache.go:206] Successfully downloaded all kic artifacts
	I0511 22:57:26.615706   35457 start.go:352] acquiring machines lock for functional-20220511225632-7294: {Name:mk5a79e9556bd14104aeb40a2a2857a7cd7b6620 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0511 22:57:26.615820   35457 start.go:356] acquired machines lock for "functional-20220511225632-7294" in 97.809µs
	I0511 22:57:26.615837   35457 start.go:94] Skipping create...Using existing machine configuration
	I0511 22:57:26.615841   35457 fix.go:55] fixHost starting: 
	I0511 22:57:26.616071   35457 cli_runner.go:164] Run: docker container inspect functional-20220511225632-7294 --format={{.State.Status}}
	I0511 22:57:26.649570   35457 fix.go:103] recreateIfNeeded on functional-20220511225632-7294: state=Running err=<nil>
	W0511 22:57:26.649589   35457 fix.go:129] unexpected machine state, will restart: <nil>
	I0511 22:57:26.653290   35457 out.go:177] * Updating the running docker "functional-20220511225632-7294" container ...
	I0511 22:57:26.655032   35457 machine.go:88] provisioning docker machine ...
	I0511 22:57:26.655061   35457 ubuntu.go:169] provisioning hostname "functional-20220511225632-7294"
	I0511 22:57:26.655106   35457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220511225632-7294
	I0511 22:57:26.687284   35457 main.go:134] libmachine: Using SSH client type: native
	I0511 22:57:26.687456   35457 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49167 <nil> <nil>}
	I0511 22:57:26.687469   35457 main.go:134] libmachine: About to run SSH command:
	sudo hostname functional-20220511225632-7294 && echo "functional-20220511225632-7294" | sudo tee /etc/hostname
	I0511 22:57:26.803241   35457 main.go:134] libmachine: SSH cmd err, output: <nil>: functional-20220511225632-7294
	
	I0511 22:57:26.803316   35457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220511225632-7294
	I0511 22:57:26.835722   35457 main.go:134] libmachine: Using SSH client type: native
	I0511 22:57:26.835854   35457 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49167 <nil> <nil>}
	I0511 22:57:26.835866   35457 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-20220511225632-7294' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-20220511225632-7294/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-20220511225632-7294' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0511 22:57:26.946015   35457 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0511 22:57:26.946033   35457 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube}
	I0511 22:57:26.946050   35457 ubuntu.go:177] setting up certificates
	I0511 22:57:26.946057   35457 provision.go:83] configureAuth start
	I0511 22:57:26.946096   35457 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-20220511225632-7294
	I0511 22:57:26.978133   35457 provision.go:138] copyHostCerts
	I0511 22:57:26.978185   35457 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/key.pem, removing ...
	I0511 22:57:26.978197   35457 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/key.pem
	I0511 22:57:26.978260   35457 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/key.pem (1679 bytes)
	I0511 22:57:26.978357   35457 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.pem, removing ...
	I0511 22:57:26.978362   35457 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.pem
	I0511 22:57:26.978387   35457 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.pem (1078 bytes)
	I0511 22:57:26.978438   35457 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cert.pem, removing ...
	I0511 22:57:26.978441   35457 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cert.pem
	I0511 22:57:26.978459   35457 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cert.pem (1123 bytes)
	I0511 22:57:26.978494   35457 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca-key.pem org=jenkins.functional-20220511225632-7294 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-20220511225632-7294]
	I0511 22:57:27.267427   35457 provision.go:172] copyRemoteCerts
	I0511 22:57:27.267471   35457 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0511 22:57:27.267501   35457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220511225632-7294
	I0511 22:57:27.300911   35457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49167 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/functional-20220511225632-7294/id_rsa Username:docker}
	I0511 22:57:27.381508   35457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0511 22:57:27.399033   35457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0511 22:57:27.415773   35457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0511 22:57:27.432458   35457 provision.go:86] duration metric: configureAuth took 486.392353ms
	I0511 22:57:27.432474   35457 ubuntu.go:193] setting minikube options for container-runtime
	I0511 22:57:27.432694   35457 config.go:178] Loaded profile config "functional-20220511225632-7294": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0511 22:57:27.432775   35457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220511225632-7294
	I0511 22:57:27.464732   35457 main.go:134] libmachine: Using SSH client type: native
	I0511 22:57:27.464868   35457 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49167 <nil> <nil>}
	I0511 22:57:27.464875   35457 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0511 22:57:27.574365   35457 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0511 22:57:27.574386   35457 ubuntu.go:71] root file system type: overlay
	I0511 22:57:27.574530   35457 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0511 22:57:27.574575   35457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220511225632-7294
	I0511 22:57:27.606324   35457 main.go:134] libmachine: Using SSH client type: native
	I0511 22:57:27.606456   35457 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49167 <nil> <nil>}
	I0511 22:57:27.606510   35457 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0511 22:57:27.722841   35457 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0511 22:57:27.722924   35457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220511225632-7294
	I0511 22:57:27.755417   35457 main.go:134] libmachine: Using SSH client type: native
	I0511 22:57:27.755545   35457 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49167 <nil> <nil>}
	I0511 22:57:27.755557   35457 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0511 22:57:27.865804   35457 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0511 22:57:27.865821   35457 machine.go:91] provisioned docker machine in 1.210780322s
	I0511 22:57:27.865832   35457 start.go:306] post-start starting for "functional-20220511225632-7294" (driver="docker")
	I0511 22:57:27.865837   35457 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0511 22:57:27.865903   35457 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0511 22:57:27.865931   35457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220511225632-7294
	I0511 22:57:27.897770   35457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49167 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/functional-20220511225632-7294/id_rsa Username:docker}
	I0511 22:57:27.981589   35457 ssh_runner.go:195] Run: cat /etc/os-release
	I0511 22:57:27.984481   35457 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0511 22:57:27.984501   35457 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0511 22:57:27.984516   35457 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0511 22:57:27.984521   35457 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0511 22:57:27.984529   35457 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/addons for local assets ...
	I0511 22:57:27.984595   35457 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files for local assets ...
	I0511 22:57:27.984675   35457 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/72942.pem -> 72942.pem in /etc/ssl/certs
	I0511 22:57:27.984757   35457 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/test/nested/copy/7294/hosts -> hosts in /etc/test/nested/copy/7294
	I0511 22:57:27.984794   35457 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/7294
	I0511 22:57:27.991584   35457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/72942.pem --> /etc/ssl/certs/72942.pem (1708 bytes)
	I0511 22:57:28.008816   35457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/test/nested/copy/7294/hosts --> /etc/test/nested/copy/7294/hosts (40 bytes)
	I0511 22:57:28.027091   35457 start.go:309] post-start completed in 161.244551ms
	I0511 22:57:28.027159   35457 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0511 22:57:28.027213   35457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220511225632-7294
	I0511 22:57:28.062710   35457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49167 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/functional-20220511225632-7294/id_rsa Username:docker}
	I0511 22:57:28.142816   35457 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0511 22:57:28.146782   35457 fix.go:57] fixHost completed within 1.530931516s
	I0511 22:57:28.146802   35457 start.go:81] releasing machines lock for "functional-20220511225632-7294", held for 1.530970154s
	I0511 22:57:28.146878   35457 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-20220511225632-7294
	I0511 22:57:28.180169   35457 ssh_runner.go:195] Run: systemctl --version
	I0511 22:57:28.180205   35457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220511225632-7294
	I0511 22:57:28.180271   35457 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0511 22:57:28.180315   35457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220511225632-7294
	I0511 22:57:28.213717   35457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49167 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/functional-20220511225632-7294/id_rsa Username:docker}
	I0511 22:57:28.217187   35457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49167 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/functional-20220511225632-7294/id_rsa Username:docker}
	I0511 22:57:28.318492   35457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0511 22:57:28.327916   35457 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0511 22:57:28.337194   35457 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0511 22:57:28.337236   35457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0511 22:57:28.346220   35457 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0511 22:57:28.359402   35457 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0511 22:57:28.456254   35457 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0511 22:57:28.554663   35457 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0511 22:57:28.564301   35457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0511 22:57:28.659872   35457 ssh_runner.go:195] Run: sudo systemctl start docker
	I0511 22:57:28.669664   35457 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0511 22:57:28.708664   35457 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0511 22:57:28.748256   35457 out.go:204] * Preparing Kubernetes v1.23.5 on Docker 20.10.15 ...
	I0511 22:57:28.748346   35457 cli_runner.go:164] Run: docker network inspect functional-20220511225632-7294 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0511 22:57:28.779465   35457 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0511 22:57:28.785079   35457 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0511 22:57:28.786749   35457 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0511 22:57:28.786796   35457 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0511 22:57:28.818039   35457 docker.go:610] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-20220511225632-7294
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.3
	k8s.gcr.io/pause:3.1
	k8s.gcr.io/pause:latest
	
	-- /stdout --
	I0511 22:57:28.818061   35457 docker.go:541] Images already preloaded, skipping extraction
	I0511 22:57:28.818109   35457 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0511 22:57:28.849860   35457 docker.go:610] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-20220511225632-7294
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.3
	k8s.gcr.io/pause:3.1
	k8s.gcr.io/pause:latest
	
	-- /stdout --
	I0511 22:57:28.849877   35457 cache_images.go:84] Images are preloaded, skipping loading
	I0511 22:57:28.849926   35457 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0511 22:57:28.930423   35457 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0511 22:57:28.930455   35457 cni.go:95] Creating CNI manager for ""
	I0511 22:57:28.930463   35457 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0511 22:57:28.930473   35457 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0511 22:57:28.930485   35457 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-20220511225632-7294 NodeName:functional-20220511225632-7294 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0511 22:57:28.930598   35457 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "functional-20220511225632-7294"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0511 22:57:28.930661   35457 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=functional-20220511225632-7294 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:functional-20220511225632-7294 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I0511 22:57:28.930702   35457 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0511 22:57:28.937732   35457 binaries.go:44] Found k8s binaries, skipping transfer
	I0511 22:57:28.937777   35457 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0511 22:57:28.944630   35457 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (356 bytes)
	I0511 22:57:28.957779   35457 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0511 22:57:28.970412   35457 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1902 bytes)
	I0511 22:57:28.982839   35457 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0511 22:57:28.985876   35457 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294 for IP: 192.168.49.2
	I0511 22:57:28.985970   35457 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.key
	I0511 22:57:28.986000   35457 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/proxy-client-ca.key
	I0511 22:57:28.986058   35457 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294/client.key
	I0511 22:57:28.986096   35457 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294/apiserver.key.dd3b5fb2
	I0511 22:57:28.986161   35457 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294/proxy-client.key
	I0511 22:57:28.986295   35457 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/7294.pem (1338 bytes)
	W0511 22:57:28.986324   35457 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/7294_empty.pem, impossibly tiny 0 bytes
	I0511 22:57:28.986332   35457 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca-key.pem (1679 bytes)
	I0511 22:57:28.986354   35457 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem (1078 bytes)
	I0511 22:57:28.986379   35457 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/cert.pem (1123 bytes)
	I0511 22:57:28.986397   35457 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/key.pem (1679 bytes)
	I0511 22:57:28.986439   35457 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/72942.pem (1708 bytes)
	I0511 22:57:28.987027   35457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0511 22:57:29.004997   35457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0511 22:57:29.022540   35457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0511 22:57:29.039827   35457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0511 22:57:29.057082   35457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0511 22:57:29.075018   35457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0511 22:57:29.092492   35457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0511 22:57:29.109681   35457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0511 22:57:29.127542   35457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/7294.pem --> /usr/share/ca-certificates/7294.pem (1338 bytes)
	I0511 22:57:29.144900   35457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/72942.pem --> /usr/share/ca-certificates/72942.pem (1708 bytes)
	I0511 22:57:29.161806   35457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0511 22:57:29.178889   35457 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0511 22:57:29.191516   35457 ssh_runner.go:195] Run: openssl version
	I0511 22:57:29.196437   35457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7294.pem && ln -fs /usr/share/ca-certificates/7294.pem /etc/ssl/certs/7294.pem"
	I0511 22:57:29.203994   35457 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7294.pem
	I0511 22:57:29.207040   35457 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 11 22:56 /usr/share/ca-certificates/7294.pem
	I0511 22:57:29.207076   35457 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7294.pem
	I0511 22:57:29.211935   35457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7294.pem /etc/ssl/certs/51391683.0"
	I0511 22:57:29.218717   35457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72942.pem && ln -fs /usr/share/ca-certificates/72942.pem /etc/ssl/certs/72942.pem"
	I0511 22:57:29.225976   35457 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72942.pem
	I0511 22:57:29.229001   35457 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 11 22:56 /usr/share/ca-certificates/72942.pem
	I0511 22:57:29.229054   35457 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72942.pem
	I0511 22:57:29.233716   35457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72942.pem /etc/ssl/certs/3ec20f2e.0"
	I0511 22:57:29.240292   35457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0511 22:57:29.247330   35457 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0511 22:57:29.250319   35457 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 11 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0511 22:57:29.250366   35457 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0511 22:57:29.255227   35457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
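The `openssl x509 -hash` / `ln -fs … /etc/ssl/certs/<hash>.0` pairs above follow OpenSSL's hashed-directory lookup convention: a CA certificate is located by the 8-hex-digit hash of its subject name, with `.0`, `.1`, … suffixes for collisions. A self-contained sketch using a throwaway self-signed certificate under `/tmp` (hypothetical paths, standing in for `/etc/ssl/certs`):

```shell
# Generate a throwaway self-signed cert, compute its subject hash, and
# create the <hash>.0 symlink the way the log does for /etc/ssl/certs.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" -days 1 \
  -keyout /tmp/demo.key -out /tmp/demo.pem 2>/dev/null
hash=$(openssl x509 -hash -noout -in /tmp/demo.pem)
ln -fs /tmp/demo.pem "/tmp/${hash}.0"
echo "$hash"
```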
	I0511 22:57:29.261884   35457 kubeadm.go:391] StartCluster: {Name:functional-20220511225632-7294 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220511225632-7294 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0511 22:57:29.262024   35457 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0511 22:57:29.292432   35457 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0511 22:57:29.299532   35457 kubeadm.go:402] found existing configuration files, will attempt cluster restart
	I0511 22:57:29.299546   35457 kubeadm.go:601] restartCluster start
	I0511 22:57:29.299583   35457 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0511 22:57:29.305635   35457 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0511 22:57:29.306083   35457 kubeconfig.go:92] found "functional-20220511225632-7294" server: "https://192.168.49.2:8441"
	I0511 22:57:29.306928   35457 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0511 22:57:29.313424   35457 kubeadm.go:569] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2022-05-11 22:56:45.043648012 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2022-05-11 22:57:28.975794748 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
	I0511 22:57:29.313434   35457 kubeadm.go:1067] stopping kube-system containers ...
	I0511 22:57:29.313473   35457 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0511 22:57:29.346035   35457 docker.go:442] Stopping containers: [6d0ac8abd5b0 34aff32b4412 f70fad5bd078 78cac41764a3 d142d56ab3a5 96f3c0503a19 1221efbd7e06 ad029eb198e2 eb08f8db1d6e c1806e43aa64 5960bc7b5187 49e2085959b2 df57c21cb948 215089bd9ae9 16e87e1858eb]
	I0511 22:57:29.346092   35457 ssh_runner.go:195] Run: docker stop 6d0ac8abd5b0 34aff32b4412 f70fad5bd078 78cac41764a3 d142d56ab3a5 96f3c0503a19 1221efbd7e06 ad029eb198e2 eb08f8db1d6e c1806e43aa64 5960bc7b5187 49e2085959b2 df57c21cb948 215089bd9ae9 16e87e1858eb
	I0511 22:57:34.598708   35457 ssh_runner.go:235] Completed: docker stop 6d0ac8abd5b0 34aff32b4412 f70fad5bd078 78cac41764a3 d142d56ab3a5 96f3c0503a19 1221efbd7e06 ad029eb198e2 eb08f8db1d6e c1806e43aa64 5960bc7b5187 49e2085959b2 df57c21cb948 215089bd9ae9 16e87e1858eb: (5.252582863s)
	I0511 22:57:34.598756   35457 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0511 22:57:34.685307   35457 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0511 22:57:34.692420   35457 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 May 11 22:56 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 May 11 22:56 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 May 11 22:56 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 May 11 22:56 /etc/kubernetes/scheduler.conf
	
	I0511 22:57:34.692468   35457 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0511 22:57:34.699365   35457 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0511 22:57:34.706220   35457 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0511 22:57:34.712599   35457 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0511 22:57:34.712647   35457 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0511 22:57:34.719093   35457 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0511 22:57:34.725477   35457 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0511 22:57:34.725516   35457 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0511 22:57:34.731720   35457 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0511 22:57:34.738234   35457 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0511 22:57:34.738247   35457 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0511 22:57:34.781129   35457 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0511 22:57:35.404939   35457 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0511 22:57:35.552229   35457 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0511 22:57:35.601895   35457 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0511 22:57:35.679789   35457 api_server.go:51] waiting for apiserver process to appear ...
	I0511 22:57:35.679833   35457 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0511 22:57:35.689889   35457 api_server.go:71] duration metric: took 10.100742ms to wait for apiserver process to appear ...
	I0511 22:57:35.689909   35457 api_server.go:87] waiting for apiserver healthz status ...
	I0511 22:57:35.689917   35457 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0511 22:57:35.696026   35457 api_server.go:266] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0511 22:57:35.702712   35457 api_server.go:140] control plane version: v1.23.5
	I0511 22:57:35.702727   35457 api_server.go:130] duration metric: took 12.814549ms to wait for apiserver health ...
	I0511 22:57:35.702735   35457 cni.go:95] Creating CNI manager for ""
	I0511 22:57:35.702741   35457 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0511 22:57:35.702746   35457 system_pods.go:43] waiting for kube-system pods to appear ...
	I0511 22:57:35.711211   35457 system_pods.go:59] 7 kube-system pods found
	I0511 22:57:35.711232   35457 system_pods.go:61] "coredns-64897985d-rp5lc" [88948fa9-8a33-48b9-b0c6-4e9a46669f71] Running
	I0511 22:57:35.711241   35457 system_pods.go:61] "etcd-functional-20220511225632-7294" [f312edb5-af11-4192-85ec-7695ee2b2a25] Running
	I0511 22:57:35.711247   35457 system_pods.go:61] "kube-apiserver-functional-20220511225632-7294" [a4f35d6f-dcbc-443e-8e7e-1fc37b28b6b5] Running
	I0511 22:57:35.711253   35457 system_pods.go:61] "kube-controller-manager-functional-20220511225632-7294" [61c4db7c-be56-4869-8785-302ce3fce852] Running
	I0511 22:57:35.711259   35457 system_pods.go:61] "kube-proxy-dvl88" [2a0a0c62-17ad-4a54-a1c2-c7afd88c9c38] Running
	I0511 22:57:35.711271   35457 system_pods.go:61] "kube-scheduler-functional-20220511225632-7294" [e0571f38-423a-457a-aefb-b2857ed93938] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0511 22:57:35.711280   35457 system_pods.go:61] "storage-provisioner" [13d6b36a-da63-427d-9a0c-67cc25dc9131] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0511 22:57:35.711285   35457 system_pods.go:74] duration metric: took 8.534921ms to wait for pod list to return data ...
	I0511 22:57:35.711294   35457 node_conditions.go:102] verifying NodePressure condition ...
	I0511 22:57:35.714732   35457 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0511 22:57:35.714750   35457 node_conditions.go:123] node cpu capacity is 8
	I0511 22:57:35.714763   35457 node_conditions.go:105] duration metric: took 3.464684ms to run NodePressure ...
	I0511 22:57:35.714783   35457 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0511 22:57:36.258598   35457 kubeadm.go:737] waiting for restarted kubelet to initialise ...
	I0511 22:57:36.264138   35457 kubeadm.go:752] kubelet initialised
	I0511 22:57:36.264152   35457 kubeadm.go:753] duration metric: took 5.535509ms waiting for restarted kubelet to initialise ...
	I0511 22:57:36.264159   35457 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0511 22:57:36.269834   35457 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-rp5lc" in "kube-system" namespace to be "Ready" ...
	I0511 22:57:36.275460   35457 pod_ready.go:97] node "functional-20220511225632-7294" hosting pod "coredns-64897985d-rp5lc" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220511225632-7294" has status "Ready":"False"
	I0511 22:57:36.275473   35457 pod_ready.go:81] duration metric: took 5.626441ms waiting for pod "coredns-64897985d-rp5lc" in "kube-system" namespace to be "Ready" ...
	E0511 22:57:36.275483   35457 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20220511225632-7294" hosting pod "coredns-64897985d-rp5lc" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220511225632-7294" has status "Ready":"False"
	I0511 22:57:36.275508   35457 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-20220511225632-7294" in "kube-system" namespace to be "Ready" ...
	I0511 22:57:36.280386   35457 pod_ready.go:97] node "functional-20220511225632-7294" hosting pod "etcd-functional-20220511225632-7294" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220511225632-7294" has status "Ready":"False"
	I0511 22:57:36.280398   35457 pod_ready.go:81] duration metric: took 4.881858ms waiting for pod "etcd-functional-20220511225632-7294" in "kube-system" namespace to be "Ready" ...
	E0511 22:57:36.280408   35457 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20220511225632-7294" hosting pod "etcd-functional-20220511225632-7294" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220511225632-7294" has status "Ready":"False"
	I0511 22:57:36.280439   35457 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-20220511225632-7294" in "kube-system" namespace to be "Ready" ...
	I0511 22:57:36.284840   35457 pod_ready.go:97] node "functional-20220511225632-7294" hosting pod "kube-apiserver-functional-20220511225632-7294" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220511225632-7294" has status "Ready":"False"
	I0511 22:57:36.284852   35457 pod_ready.go:81] duration metric: took 4.405653ms waiting for pod "kube-apiserver-functional-20220511225632-7294" in "kube-system" namespace to be "Ready" ...
	E0511 22:57:36.284859   35457 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20220511225632-7294" hosting pod "kube-apiserver-functional-20220511225632-7294" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220511225632-7294" has status "Ready":"False"
	I0511 22:57:36.284880   35457 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-20220511225632-7294" in "kube-system" namespace to be "Ready" ...
	I0511 22:57:36.289473   35457 pod_ready.go:97] node "functional-20220511225632-7294" hosting pod "kube-controller-manager-functional-20220511225632-7294" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220511225632-7294" has status "Ready":"False"
	I0511 22:57:36.289489   35457 pod_ready.go:81] duration metric: took 4.600784ms waiting for pod "kube-controller-manager-functional-20220511225632-7294" in "kube-system" namespace to be "Ready" ...
	E0511 22:57:36.289499   35457 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20220511225632-7294" hosting pod "kube-controller-manager-functional-20220511225632-7294" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220511225632-7294" has status "Ready":"False"
	I0511 22:57:36.289524   35457 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dvl88" in "kube-system" namespace to be "Ready" ...
	I0511 22:57:36.661705   35457 pod_ready.go:92] pod "kube-proxy-dvl88" in "kube-system" namespace has status "Ready":"True"
	I0511 22:57:36.661715   35457 pod_ready.go:81] duration metric: took 372.183894ms waiting for pod "kube-proxy-dvl88" in "kube-system" namespace to be "Ready" ...
	I0511 22:57:36.661728   35457 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-20220511225632-7294" in "kube-system" namespace to be "Ready" ...
	I0511 22:57:38.063249   35457 pod_ready.go:97] error getting pod "kube-scheduler-functional-20220511225632-7294" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-20220511225632-7294": dial tcp 192.168.49.2:8441: connect: connection refused
	I0511 22:57:38.063275   35457 pod_ready.go:81] duration metric: took 1.40153873s waiting for pod "kube-scheduler-functional-20220511225632-7294" in "kube-system" namespace to be "Ready" ...
	E0511 22:57:38.063287   35457 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-functional-20220511225632-7294" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-20220511225632-7294": dial tcp 192.168.49.2:8441: connect: connection refused
	I0511 22:57:38.063307   35457 pod_ready.go:38] duration metric: took 1.799139276s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0511 22:57:38.063321   35457 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	W0511 22:57:38.071117   35457 kubeadm.go:761] unable to adjust resource limits: oom_adj check cmd /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj". : /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": Process exited with status 1
	stdout:
	
	stderr:
	cat: /proc//oom_adj: No such file or directory
	I0511 22:57:38.071133   35457 kubeadm.go:605] restartCluster took 8.771583459s
	I0511 22:57:38.071139   35457 kubeadm.go:393] StartCluster complete in 8.809261538s
	I0511 22:57:38.071156   35457 settings.go:142] acquiring lock: {Name:mk1287875a6024bfdfd8882975fa4d7c31d85e31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 22:57:38.071259   35457 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	I0511 22:57:38.071834   35457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig: {Name:mka611e3c6ccae6ff6a6751a4f0fde8a6d2789a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0511 22:57:38.073027   35457 kapi.go:226] failed getting deployment scale, will retry: Get "https://192.168.49.2:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8441: connect: connection refused
	I0511 22:57:40.309257   35457 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "functional-20220511225632-7294" rescaled to 1
	I0511 22:57:40.309329   35457 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0511 22:57:40.309440   35457 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0511 22:57:40.309708   35457 config.go:178] Loaded profile config "functional-20220511225632-7294": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0511 22:57:40.312831   35457 out.go:177] * Verifying Kubernetes components...
	I0511 22:57:40.309821   35457 addons.go:415] enableAddons start: toEnable=map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
	I0511 22:57:40.314249   35457 addons.go:65] Setting storage-provisioner=true in profile "functional-20220511225632-7294"
	I0511 22:57:40.314262   35457 addons.go:153] Setting addon storage-provisioner=true in "functional-20220511225632-7294"
	W0511 22:57:40.314267   35457 addons.go:165] addon storage-provisioner should already be in state true
	I0511 22:57:40.314265   35457 addons.go:65] Setting default-storageclass=true in profile "functional-20220511225632-7294"
	I0511 22:57:40.314278   35457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0511 22:57:40.314283   35457 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-20220511225632-7294"
	I0511 22:57:40.314303   35457 host.go:66] Checking if "functional-20220511225632-7294" exists ...
	I0511 22:57:40.314550   35457 cli_runner.go:164] Run: docker container inspect functional-20220511225632-7294 --format={{.State.Status}}
	I0511 22:57:40.314647   35457 cli_runner.go:164] Run: docker container inspect functional-20220511225632-7294 --format={{.State.Status}}
	I0511 22:57:40.349957   35457 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0511 22:57:40.351574   35457 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0511 22:57:40.351582   35457 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0511 22:57:40.351624   35457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220511225632-7294
	I0511 22:57:40.399936   35457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49167 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/functional-20220511225632-7294/id_rsa Username:docker}
	I0511 22:57:40.468752   35457 addons.go:153] Setting addon default-storageclass=true in "functional-20220511225632-7294"
	W0511 22:57:40.468770   35457 addons.go:165] addon default-storageclass should already be in state true
	I0511 22:57:40.468802   35457 host.go:66] Checking if "functional-20220511225632-7294" exists ...
	I0511 22:57:40.469194   35457 cli_runner.go:164] Run: docker container inspect functional-20220511225632-7294 --format={{.State.Status}}
	I0511 22:57:40.506296   35457 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0511 22:57:40.506307   35457 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0511 22:57:40.506349   35457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220511225632-7294
	I0511 22:57:40.538750   35457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49167 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/functional-20220511225632-7294/id_rsa Username:docker}
	I0511 22:57:40.588255   35457 start.go:795] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0511 22:57:40.588252   35457 node_ready.go:35] waiting up to 6m0s for node "functional-20220511225632-7294" to be "Ready" ...
	I0511 22:57:40.590936   35457 node_ready.go:49] node "functional-20220511225632-7294" has status "Ready":"True"
	I0511 22:57:40.590945   35457 node_ready.go:38] duration metric: took 2.673051ms waiting for node "functional-20220511225632-7294" to be "Ready" ...
	I0511 22:57:40.590954   35457 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0511 22:57:40.597044   35457 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-rp5lc" in "kube-system" namespace to be "Ready" ...
	I0511 22:57:40.601060   35457 pod_ready.go:92] pod "coredns-64897985d-rp5lc" in "kube-system" namespace has status "Ready":"True"
	I0511 22:57:40.601067   35457 pod_ready.go:81] duration metric: took 4.00719ms waiting for pod "coredns-64897985d-rp5lc" in "kube-system" namespace to be "Ready" ...
	I0511 22:57:40.601074   35457 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-20220511225632-7294" in "kube-system" namespace to be "Ready" ...
	I0511 22:57:40.604731   35457 pod_ready.go:92] pod "etcd-functional-20220511225632-7294" in "kube-system" namespace has status "Ready":"True"
	I0511 22:57:40.604741   35457 pod_ready.go:81] duration metric: took 3.660833ms waiting for pod "etcd-functional-20220511225632-7294" in "kube-system" namespace to be "Ready" ...
	I0511 22:57:40.604751   35457 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-20220511225632-7294" in "kube-system" namespace to be "Ready" ...
	I0511 22:57:40.608733   35457 pod_ready.go:92] pod "kube-controller-manager-functional-20220511225632-7294" in "kube-system" namespace has status "Ready":"True"
	I0511 22:57:40.608742   35457 pod_ready.go:81] duration metric: took 3.985273ms waiting for pod "kube-controller-manager-functional-20220511225632-7294" in "kube-system" namespace to be "Ready" ...
	I0511 22:57:40.608749   35457 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dvl88" in "kube-system" namespace to be "Ready" ...
	I0511 22:57:40.656204   35457 pod_ready.go:92] pod "kube-proxy-dvl88" in "kube-system" namespace has status "Ready":"True"
	I0511 22:57:40.656215   35457 pod_ready.go:81] duration metric: took 47.460341ms waiting for pod "kube-proxy-dvl88" in "kube-system" namespace to be "Ready" ...
	I0511 22:57:40.656226   35457 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-20220511225632-7294" in "kube-system" namespace to be "Ready" ...
	I0511 22:57:40.668168   35457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0511 22:57:40.668425   35457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0511 22:57:41.383828   35457 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0511 22:57:41.385658   35457 addons.go:417] enableAddons completed in 1.075835131s
	I0511 22:57:42.997546   35457 pod_ready.go:102] pod "kube-scheduler-functional-20220511225632-7294" in "kube-system" namespace has status "Ready":"False"
	I0511 22:57:45.496973   35457 pod_ready.go:102] pod "kube-scheduler-functional-20220511225632-7294" in "kube-system" namespace has status "Ready":"False"
	I0511 22:57:47.997491   35457 pod_ready.go:102] pod "kube-scheduler-functional-20220511225632-7294" in "kube-system" namespace has status "Ready":"False"
	I0511 22:57:49.996931   35457 pod_ready.go:92] pod "kube-scheduler-functional-20220511225632-7294" in "kube-system" namespace has status "Ready":"True"
	I0511 22:57:49.996949   35457 pod_ready.go:81] duration metric: took 9.34071682s waiting for pod "kube-scheduler-functional-20220511225632-7294" in "kube-system" namespace to be "Ready" ...
	I0511 22:57:49.996958   35457 pod_ready.go:38] duration metric: took 9.405994611s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0511 22:57:49.996976   35457 api_server.go:51] waiting for apiserver process to appear ...
	I0511 22:57:49.997025   35457 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0511 22:57:50.006522   35457 api_server.go:71] duration metric: took 9.697053726s to wait for apiserver process to appear ...
	I0511 22:57:50.006537   35457 api_server.go:87] waiting for apiserver healthz status ...
	I0511 22:57:50.006545   35457 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0511 22:57:50.011202   35457 api_server.go:266] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0511 22:57:50.012030   35457 api_server.go:140] control plane version: v1.23.5
	I0511 22:57:50.012041   35457 api_server.go:130] duration metric: took 5.499631ms to wait for apiserver health ...
	I0511 22:57:50.012048   35457 system_pods.go:43] waiting for kube-system pods to appear ...
	I0511 22:57:50.017642   35457 system_pods.go:59] 7 kube-system pods found
	I0511 22:57:50.017659   35457 system_pods.go:61] "coredns-64897985d-rp5lc" [88948fa9-8a33-48b9-b0c6-4e9a46669f71] Running
	I0511 22:57:50.017666   35457 system_pods.go:61] "etcd-functional-20220511225632-7294" [f312edb5-af11-4192-85ec-7695ee2b2a25] Running
	I0511 22:57:50.017677   35457 system_pods.go:61] "kube-apiserver-functional-20220511225632-7294" [94691691-fe44-4ba7-9eb3-9e2887c77fb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0511 22:57:50.017697   35457 system_pods.go:61] "kube-controller-manager-functional-20220511225632-7294" [61c4db7c-be56-4869-8785-302ce3fce852] Running
	I0511 22:57:50.017705   35457 system_pods.go:61] "kube-proxy-dvl88" [2a0a0c62-17ad-4a54-a1c2-c7afd88c9c38] Running
	I0511 22:57:50.017711   35457 system_pods.go:61] "kube-scheduler-functional-20220511225632-7294" [e0571f38-423a-457a-aefb-b2857ed93938] Running
	I0511 22:57:50.017717   35457 system_pods.go:61] "storage-provisioner" [13d6b36a-da63-427d-9a0c-67cc25dc9131] Running
	I0511 22:57:50.017722   35457 system_pods.go:74] duration metric: took 5.669565ms to wait for pod list to return data ...
	I0511 22:57:50.017730   35457 default_sa.go:34] waiting for default service account to be created ...
	I0511 22:57:50.020156   35457 default_sa.go:45] found service account: "default"
	I0511 22:57:50.020168   35457 default_sa.go:55] duration metric: took 2.4325ms for default service account to be created ...
	I0511 22:57:50.020175   35457 system_pods.go:116] waiting for k8s-apps to be running ...
	I0511 22:57:50.025108   35457 system_pods.go:86] 7 kube-system pods found
	I0511 22:57:50.025121   35457 system_pods.go:89] "coredns-64897985d-rp5lc" [88948fa9-8a33-48b9-b0c6-4e9a46669f71] Running
	I0511 22:57:50.025126   35457 system_pods.go:89] "etcd-functional-20220511225632-7294" [f312edb5-af11-4192-85ec-7695ee2b2a25] Running
	I0511 22:57:50.025132   35457 system_pods.go:89] "kube-apiserver-functional-20220511225632-7294" [94691691-fe44-4ba7-9eb3-9e2887c77fb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0511 22:57:50.025137   35457 system_pods.go:89] "kube-controller-manager-functional-20220511225632-7294" [61c4db7c-be56-4869-8785-302ce3fce852] Running
	I0511 22:57:50.025141   35457 system_pods.go:89] "kube-proxy-dvl88" [2a0a0c62-17ad-4a54-a1c2-c7afd88c9c38] Running
	I0511 22:57:50.025145   35457 system_pods.go:89] "kube-scheduler-functional-20220511225632-7294" [e0571f38-423a-457a-aefb-b2857ed93938] Running
	I0511 22:57:50.025148   35457 system_pods.go:89] "storage-provisioner" [13d6b36a-da63-427d-9a0c-67cc25dc9131] Running
	I0511 22:57:50.025153   35457 system_pods.go:126] duration metric: took 4.974177ms to wait for k8s-apps to be running ...
	I0511 22:57:50.025158   35457 system_svc.go:44] waiting for kubelet service to be running ....
	I0511 22:57:50.025195   35457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0511 22:57:50.035429   35457 system_svc.go:56] duration metric: took 10.2619ms WaitForService to wait for kubelet.
	I0511 22:57:50.035446   35457 kubeadm.go:548] duration metric: took 9.725983142s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0511 22:57:50.035467   35457 node_conditions.go:102] verifying NodePressure condition ...
	I0511 22:57:50.038272   35457 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0511 22:57:50.038283   35457 node_conditions.go:123] node cpu capacity is 8
	I0511 22:57:50.038292   35457 node_conditions.go:105] duration metric: took 2.82232ms to run NodePressure ...
	I0511 22:57:50.038300   35457 start.go:213] waiting for startup goroutines ...
	I0511 22:57:50.077195   35457 start.go:499] kubectl: 1.24.0, cluster: 1.23.5 (minor skew: 1)
	I0511 22:57:50.079412   35457 out.go:177] * Done! kubectl is now configured to use "functional-20220511225632-7294" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-05-11 22:56:41 UTC, end at Wed 2022-05-11 22:57:51 UTC. --
	May 11 22:56:43 functional-20220511225632-7294 dockerd[492]: time="2022-05-11T22:56:43.267823223Z" level=info msg="Docker daemon" commit=4433bf6 graphdriver(s)=overlay2 version=20.10.15
	May 11 22:56:43 functional-20220511225632-7294 dockerd[492]: time="2022-05-11T22:56:43.267884476Z" level=info msg="Daemon has completed initialization"
	May 11 22:56:43 functional-20220511225632-7294 systemd[1]: Started Docker Application Container Engine.
	May 11 22:56:43 functional-20220511225632-7294 dockerd[492]: time="2022-05-11T22:56:43.285562381Z" level=info msg="API listen on [::]:2376"
	May 11 22:56:43 functional-20220511225632-7294 dockerd[492]: time="2022-05-11T22:56:43.288863281Z" level=info msg="API listen on /var/run/docker.sock"
	May 11 22:57:16 functional-20220511225632-7294 dockerd[492]: time="2022-05-11T22:57:16.648989102Z" level=info msg="ignoring event" container=a352b344b03e0847dcc7d67fe332bfd0bee4c4e9778b986e381a1e1ef8447235 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 22:57:16 functional-20220511225632-7294 dockerd[492]: time="2022-05-11T22:57:16.705127003Z" level=info msg="ignoring event" container=96f3c0503a198e6ba5bfec831fea4b37abfe00b89b7035fc53f2b80793d1fb5c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 22:57:29 functional-20220511225632-7294 dockerd[492]: time="2022-05-11T22:57:29.403756376Z" level=info msg="ignoring event" container=16e87e1858eb49349405b242b8fa1d021f4b37a7b72658ac4af2496a2f28a964 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 22:57:29 functional-20220511225632-7294 dockerd[492]: time="2022-05-11T22:57:29.569327395Z" level=info msg="ignoring event" container=34aff32b4412f2530e54bdebed564f919ccfb71156444d777a5e89115d5c1878 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 22:57:29 functional-20220511225632-7294 dockerd[492]: time="2022-05-11T22:57:29.570173329Z" level=info msg="ignoring event" container=1221efbd7e06205471df7e2a383d7d79fef058b46d33a14e5d2d34cfa1d6e077 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 22:57:29 functional-20220511225632-7294 dockerd[492]: time="2022-05-11T22:57:29.578243227Z" level=info msg="ignoring event" container=78cac41764a33fd0c605b41880fb3cee305fee97b787c7259d30cbda3d7483b9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 22:57:29 functional-20220511225632-7294 dockerd[492]: time="2022-05-11T22:57:29.578616150Z" level=info msg="ignoring event" container=6d0ac8abd5b03a3205a1521cb68618a95b744f73ebc5b7c7adb45da3fc3e82cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 22:57:29 functional-20220511225632-7294 dockerd[492]: time="2022-05-11T22:57:29.579312284Z" level=info msg="ignoring event" container=215089bd9ae9aae649433e1fbab73912baae419bfd4d753707845504f3b98843 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 22:57:29 functional-20220511225632-7294 dockerd[492]: time="2022-05-11T22:57:29.581375152Z" level=info msg="ignoring event" container=d142d56ab3a59e9bc5ffa4bac88a96a743d009c19c1d036affbcf987de718b53 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 22:57:29 functional-20220511225632-7294 dockerd[492]: time="2022-05-11T22:57:29.656873858Z" level=info msg="ignoring event" container=49e2085959b29b1ea1e31a45ccecb0da571df49c811e4d2aabf115912c00cc89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 22:57:29 functional-20220511225632-7294 dockerd[492]: time="2022-05-11T22:57:29.657294850Z" level=info msg="ignoring event" container=5960bc7b51877b0a163536ae35120f477a43cbd368b4f366695bcdfb55621d4d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 22:57:29 functional-20220511225632-7294 dockerd[492]: time="2022-05-11T22:57:29.657338127Z" level=info msg="ignoring event" container=df57c21cb948c97c4113c0f5044c0414122d9e80e695a3ca6171e6f0c9289167 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 22:57:29 functional-20220511225632-7294 dockerd[492]: time="2022-05-11T22:57:29.657361943Z" level=info msg="ignoring event" container=ad029eb198e2339bcbbcdef49480b3a0be286031c9f34931dacae1e809941ac1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 22:57:29 functional-20220511225632-7294 dockerd[492]: time="2022-05-11T22:57:29.979002596Z" level=info msg="ignoring event" container=c1806e43aa640eaf138d5f05d3744a10e684dc88df1bb600c82cc16fe608c887 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 22:57:30 functional-20220511225632-7294 dockerd[492]: time="2022-05-11T22:57:30.056241537Z" level=info msg="ignoring event" container=eb08f8db1d6e27e36a666d8405b52ad35d7e7adaef728db64bd3e79de223bb55 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 22:57:30 functional-20220511225632-7294 dockerd[492]: time="2022-05-11T22:57:30.773013963Z" level=info msg="ignoring event" container=371f9ddf15672c187f5cb7d898615742ac9ecc66ab0e259c357ef2f14c2fdb9e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 22:57:34 functional-20220511225632-7294 dockerd[492]: time="2022-05-11T22:57:34.575406995Z" level=info msg="ignoring event" container=f70fad5bd078cf07f7e7ed8154916362deba1c8b5983fc855cd901d1464cf436 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 22:57:37 functional-20220511225632-7294 dockerd[492]: time="2022-05-11T22:57:37.284040875Z" level=info msg="ignoring event" container=a5c57b0b7b34c206536ae62d8e231382b1cc2342e88a753b577c373d438d932c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 22:57:38 functional-20220511225632-7294 dockerd[492]: time="2022-05-11T22:57:38.009231466Z" level=info msg="ignoring event" container=067bc9c4a9736cec4cf56addc23c1078a5ac3746f2688acc12b69209072c0250 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 22:57:38 functional-20220511225632-7294 dockerd[492]: time="2022-05-11T22:57:38.068700597Z" level=info msg="ignoring event" container=0c161d1f0fc54e20a77d39b045aad7b0cc1f936dd47835514ec3de43d767b404 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	90b845a534869       6e38f40d628db       9 seconds ago        Running             storage-provisioner       2                   d5147b97b930d
	625a39b81866c       a4ca41631cc7a       9 seconds ago        Running             coredns                   1                   7d04e67054299
	640a51c740165       3fc1d62d65872       13 seconds ago       Running             kube-apiserver            1                   1bc5da56420aa
	a5c57b0b7b34c       3fc1d62d65872       14 seconds ago       Exited              kube-apiserver            0                   1bc5da56420aa
	371f9ddf15672       6e38f40d628db       21 seconds ago       Exited              storage-provisioner       1                   d5147b97b930d
	52e87935cb774       884d49d6d8c9f       21 seconds ago       Running             kube-scheduler            1                   1540a8f1f51f7
	eeedd61097726       25f8c7f3da61c       21 seconds ago       Running             etcd                      1                   5e995562ac047
	56d11119fb846       b0c9e5e4dbb14       21 seconds ago       Running             kube-controller-manager   1                   7479e200b6258
	5b836b9c5be8b       3c53fa8541f95       21 seconds ago       Running             kube-proxy                1                   44718ebc0a76f
	f70fad5bd078c       a4ca41631cc7a       41 seconds ago       Exited              coredns                   0                   d142d56ab3a59
	78cac41764a33       3c53fa8541f95       42 seconds ago       Exited              kube-proxy                0                   1221efbd7e062
	ad029eb198e23       b0c9e5e4dbb14       About a minute ago   Exited              kube-controller-manager   0                   49e2085959b29
	c1806e43aa640       884d49d6d8c9f       About a minute ago   Exited              kube-scheduler            0                   215089bd9ae9a
	5960bc7b51877       25f8c7f3da61c       About a minute ago   Exited              etcd                      0                   df57c21cb948c
	
	* 
	* ==> coredns [625a39b81866] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	
	* 
	* ==> coredns [f70fad5bd078] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-20220511225632-7294
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-20220511225632-7294
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50a7977b568d2ad3e04003527a57f4502d6177a0
	                    minikube.k8s.io/name=functional-20220511225632-7294
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_11T22_56_55_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 11 May 2022 22:56:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-20220511225632-7294
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 11 May 2022 22:57:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 11 May 2022 22:57:36 +0000   Wed, 11 May 2022 22:56:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 11 May 2022 22:57:36 +0000   Wed, 11 May 2022 22:56:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 11 May 2022 22:57:36 +0000   Wed, 11 May 2022 22:56:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 11 May 2022 22:57:36 +0000   Wed, 11 May 2022 22:57:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-20220511225632-7294
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	System Info:
	  Machine ID:                 8556a0a9a0e64ba4b825f672d2dce0b9
	  System UUID:                5880ed1f-c668-484b-8a7c-800dbd789255
	  Boot ID:                    606a2383-21e3-4a1f-9ace-302a4c5cda25
	  Kernel Version:             5.13.0-1025-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.15
	  Kubelet Version:            v1.23.5
	  Kube-Proxy Version:         v1.23.5
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-rp5lc                                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     43s
	  kube-system                 etcd-functional-20220511225632-7294                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         56s
	  kube-system                 kube-apiserver-functional-20220511225632-7294             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 kube-controller-manager-functional-20220511225632-7294    200m (2%)     0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-proxy-dvl88                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-scheduler-functional-20220511225632-7294             100m (1%)     0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 storage-provisioner                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 15s   kube-proxy  
	  Normal  Starting                 41s   kube-proxy  
	  Normal  NodeHasSufficientMemory  56s   kubelet     Node functional-20220511225632-7294 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s   kubelet     Node functional-20220511225632-7294 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s   kubelet     Node functional-20220511225632-7294 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  56s   kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 56s   kubelet     Starting kubelet.
	  Normal  NodeReady                45s   kubelet     Node functional-20220511225632-7294 status is now: NodeReady
	  Normal  Starting                 16s   kubelet     Starting kubelet.
	  Normal  NodeHasNoDiskPressure    16s   kubelet     Node functional-20220511225632-7294 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16s   kubelet     Node functional-20220511225632-7294 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             16s   kubelet     Node functional-20220511225632-7294 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  16s   kubelet     Node functional-20220511225632-7294 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  15s   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                15s   kubelet     Node functional-20220511225632-7294 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [May11 22:17]  #2
	[  +0.001730]  #3
	[  +0.000877]  #4
	[  +0.003053] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001948] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001470]  #5
	[  +0.000769]  #6
	[  +0.003206]  #7
	[  +0.050833] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.564823] i8042: Warning: Keylock active
	[  +0.010210] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000726] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000650] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000650] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000722] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000628] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000639] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000704] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000680] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +7.967240] kauditd_printk_skb: 32 callbacks suppressed
	
	* 
	* ==> etcd [5960bc7b5187] <==
	* {"level":"info","ts":"2022-05-11T22:56:50.579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-05-11T22:56:50.579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-05-11T22:56:50.579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-05-11T22:56:50.579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-05-11T22:56:50.579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-05-11T22:56:50.579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-05-11T22:56:50.579Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20220511225632-7294 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-05-11T22:56:50.579Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-11T22:56:50.579Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-11T22:56:50.579Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-11T22:56:50.579Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-11T22:56:50.579Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-11T22:56:50.580Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-11T22:56:50.580Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-11T22:56:50.580Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-11T22:56:50.580Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-05-11T22:56:50.581Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-05-11T22:57:29.463Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-05-11T22:57:29.463Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"functional-20220511225632-7294","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	WARNING: 2022/05/11 22:57:29 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/05/11 22:57:29 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-05-11T22:57:29.474Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2022-05-11T22:57:29.476Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-05-11T22:57:29.477Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-05-11T22:57:29.477Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"functional-20220511225632-7294","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> etcd [eeedd6109772] <==
	* {"level":"info","ts":"2022-05-11T22:57:30.871Z","caller":"etcdserver/server.go:843","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.1","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-05-11T22:57:30.873Z","caller":"etcdserver/server.go:744","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-05-11T22:57:30.873Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2022-05-11T22:57:30.873Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2022-05-11T22:57:30.873Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-11T22:57:30.873Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-11T22:57:30.875Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-05-11T22:57:30.876Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-05-11T22:57:30.876Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-05-11T22:57:30.876Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-05-11T22:57:30.876Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-05-11T22:57:32.164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2022-05-11T22:57:32.164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2022-05-11T22:57:32.164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-05-11T22:57:32.164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2022-05-11T22:57:32.164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2022-05-11T22:57:32.164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2022-05-11T22:57:32.164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2022-05-11T22:57:32.165Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20220511225632-7294 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-05-11T22:57:32.165Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-11T22:57:32.165Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-11T22:57:32.166Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-11T22:57:32.166Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-11T22:57:32.168Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-05-11T22:57:32.168Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  22:57:51 up 40 min,  0 users,  load average: 1.39, 1.00, 0.46
	Linux functional-20220511225632-7294 5.13.0-1025-gcp #30~20.04.1-Ubuntu SMP Tue Apr 26 03:01:25 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [640a51c74016] <==
	* I0511 22:57:40.279462       1 naming_controller.go:291] Starting NamingConditionController
	I0511 22:57:40.288433       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0511 22:57:40.288460       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0511 22:57:40.288978       1 autoregister_controller.go:141] Starting autoregister controller
	I0511 22:57:40.288998       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0511 22:57:40.289022       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0511 22:57:40.289032       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
	I0511 22:57:40.289078       1 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0511 22:57:40.289637       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0511 22:57:40.289724       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
	I0511 22:57:40.290534       1 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0511 22:57:40.455099       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0511 22:57:40.455166       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0511 22:57:40.456401       1 cache.go:39] Caches are synced for autoregister controller
	I0511 22:57:40.456709       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	E0511 22:57:40.456733       1 controller.go:157] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0511 22:57:40.476887       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0511 22:57:40.477110       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0511 22:57:40.477113       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0511 22:57:41.276757       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0511 22:57:41.276796       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0511 22:57:41.280537       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0511 22:57:45.996121       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0511 22:57:46.681702       1 controller.go:611] quota admission added evaluator for: endpoints
	I0511 22:57:46.727937       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [a5c57b0b7b34] <==
	* I0511 22:57:37.264082       1 server.go:565] external host was not specified, using 192.168.49.2
	I0511 22:57:37.264761       1 server.go:172] Version: v1.23.5
	E0511 22:57:37.265072       1 run.go:74] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	
	* 
	* ==> kube-controller-manager [56d11119fb84] <==
	* I0511 22:57:46.595569       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0511 22:57:46.597755       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0511 22:57:46.607109       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0511 22:57:46.625289       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0511 22:57:46.630990       1 shared_informer.go:247] Caches are synced for resource quota 
	I0511 22:57:46.632226       1 shared_informer.go:247] Caches are synced for resource quota 
	I0511 22:57:46.634352       1 shared_informer.go:247] Caches are synced for job 
	I0511 22:57:46.656770       1 shared_informer.go:247] Caches are synced for disruption 
	I0511 22:57:46.656801       1 disruption.go:371] Sending events to api server.
	I0511 22:57:46.661016       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0511 22:57:46.663236       1 shared_informer.go:247] Caches are synced for stateful set 
	I0511 22:57:46.664353       1 shared_informer.go:247] Caches are synced for HPA 
	I0511 22:57:46.666486       1 shared_informer.go:247] Caches are synced for deployment 
	I0511 22:57:46.669328       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0511 22:57:46.670540       1 shared_informer.go:247] Caches are synced for endpoint 
	I0511 22:57:46.672680       1 shared_informer.go:247] Caches are synced for taint 
	I0511 22:57:46.672723       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0511 22:57:46.672751       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: 
	W0511 22:57:46.672830       1 node_lifecycle_controller.go:1012] Missing timestamp for Node functional-20220511225632-7294. Assuming now as a timestamp.
	I0511 22:57:46.672867       1 node_lifecycle_controller.go:1213] Controller detected that zone  is now in state Normal.
	I0511 22:57:46.673158       1 event.go:294] "Event occurred" object="functional-20220511225632-7294" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-20220511225632-7294 event: Registered Node functional-20220511225632-7294 in Controller"
	I0511 22:57:46.676698       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0511 22:57:47.045609       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0511 22:57:47.104526       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0511 22:57:47.104559       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-controller-manager [ad029eb198e2] <==
	* I0511 22:57:08.855901       1 shared_informer.go:247] Caches are synced for taint 
	I0511 22:57:08.855992       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: 
	I0511 22:57:08.856012       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	W0511 22:57:08.856056       1 node_lifecycle_controller.go:1012] Missing timestamp for Node functional-20220511225632-7294. Assuming now as a timestamp.
	I0511 22:57:08.856090       1 node_lifecycle_controller.go:1213] Controller detected that zone  is now in state Normal.
	I0511 22:57:08.856000       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I0511 22:57:08.856156       1 shared_informer.go:247] Caches are synced for service account 
	I0511 22:57:08.856201       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0511 22:57:08.856210       1 shared_informer.go:247] Caches are synced for expand 
	I0511 22:57:08.856291       1 event.go:294] "Event occurred" object="functional-20220511225632-7294" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-20220511225632-7294 event: Registered Node functional-20220511225632-7294 in Controller"
	I0511 22:57:08.868144       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0511 22:57:08.875474       1 range_allocator.go:374] Set node functional-20220511225632-7294 PodCIDR to [10.244.0.0/24]
	I0511 22:57:08.958191       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-nsjmf"
	I0511 22:57:08.961935       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-dvl88"
	I0511 22:57:08.970391       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-rp5lc"
	I0511 22:57:08.971981       1 shared_informer.go:247] Caches are synced for cronjob 
	I0511 22:57:09.055366       1 shared_informer.go:247] Caches are synced for TTL after finished 
	I0511 22:57:09.055828       1 shared_informer.go:247] Caches are synced for job 
	I0511 22:57:09.082936       1 shared_informer.go:247] Caches are synced for resource quota 
	I0511 22:57:09.145249       1 shared_informer.go:247] Caches are synced for resource quota 
	I0511 22:57:09.190561       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0511 22:57:09.194319       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-nsjmf"
	I0511 22:57:09.465114       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0511 22:57:09.522683       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0511 22:57:09.522706       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [5b836b9c5be8] <==
	* E0511 22:57:30.962881       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220511225632-7294": dial tcp 192.168.49.2:8441: connect: connection refused
	E0511 22:57:33.865458       1 node.go:152] Failed to retrieve node info: nodes "functional-20220511225632-7294" is forbidden: User "system:serviceaccount:kube-system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:node-proxier" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	I0511 22:57:36.158518       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0511 22:57:36.158562       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0511 22:57:36.158609       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0511 22:57:36.259821       1 server_others.go:206] "Using iptables Proxier"
	I0511 22:57:36.259871       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0511 22:57:36.259884       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0511 22:57:36.259912       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0511 22:57:36.260327       1 server.go:656] "Version info" version="v1.23.5"
	I0511 22:57:36.261382       1 config.go:317] "Starting service config controller"
	I0511 22:57:36.261424       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0511 22:57:36.261426       1 config.go:226] "Starting endpoint slice config controller"
	I0511 22:57:36.261446       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0511 22:57:36.361598       1 shared_informer.go:247] Caches are synced for service config 
	I0511 22:57:36.361608       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-proxy [78cac41764a3] <==
	* I0511 22:57:09.575527       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0511 22:57:09.575643       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0511 22:57:09.575678       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0511 22:57:09.596944       1 server_others.go:206] "Using iptables Proxier"
	I0511 22:57:09.596989       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0511 22:57:09.597001       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0511 22:57:09.597026       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0511 22:57:09.597395       1 server.go:656] "Version info" version="v1.23.5"
	I0511 22:57:09.597899       1 config.go:317] "Starting service config controller"
	I0511 22:57:09.597917       1 config.go:226] "Starting endpoint slice config controller"
	I0511 22:57:09.597918       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0511 22:57:09.597927       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0511 22:57:09.698604       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0511 22:57:09.699044       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [52e87935cb77] <==
	* I0511 22:57:33.865912       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0511 22:57:33.865961       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0511 22:57:33.875332       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0511 22:57:33.875438       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0511 22:57:33.875539       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0511 22:57:33.875620       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0511 22:57:33.875644       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0511 22:57:33.875669       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0511 22:57:33.875711       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0511 22:57:33.967048       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0511 22:57:40.455197       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)
	E0511 22:57:40.455356       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: unknown (get namespaces)
	E0511 22:57:40.455421       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: unknown (get services)
	E0511 22:57:40.455495       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)
	E0511 22:57:40.455616       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)
	E0511 22:57:40.455741       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: unknown (get configmaps)
	E0511 22:57:40.456169       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)
	E0511 22:57:40.456188       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)
	E0511 22:57:40.456204       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: unknown (get nodes)
	E0511 22:57:40.456273       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)
	E0511 22:57:40.456308       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)
	E0511 22:57:40.456329       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)
	E0511 22:57:40.456366       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)
	E0511 22:57:40.456461       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)
	E0511 22:57:40.459030       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: unknown (get pods)
	
	* 
	* ==> kube-scheduler [c1806e43aa64] <==
	* E0511 22:56:52.558344       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0511 22:56:52.557457       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0511 22:56:52.558362       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0511 22:56:52.557939       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0511 22:56:52.558389       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0511 22:56:53.306785       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0511 22:56:53.306827       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0511 22:56:53.395711       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0511 22:56:53.395742       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0511 22:56:53.412891       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0511 22:56:53.412917       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0511 22:56:53.424199       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0511 22:56:53.424238       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0511 22:56:53.465695       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0511 22:56:53.465734       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0511 22:56:53.493541       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0511 22:56:53.493576       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0511 22:56:53.509965       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0511 22:56:53.509994       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0511 22:56:53.592646       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0511 22:56:53.592683       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0511 22:56:56.180600       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0511 22:57:29.558782       1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
	I0511 22:57:29.559000       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0511 22:57:29.559066       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-05-11 22:56:41 UTC, end at Wed 2022-05-11 22:57:51 UTC. --
	May 11 22:57:38 functional-20220511225632-7294 kubelet[5718]: I0511 22:57:38.347540    5718 scope.go:110] "RemoveContainer" containerID="eb08f8db1d6e27e36a666d8405b52ad35d7e7adaef728db64bd3e79de223bb55"
	May 11 22:57:38 functional-20220511225632-7294 kubelet[5718]: E0511 22:57:38.348156    5718 remote_runtime.go:572] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: eb08f8db1d6e27e36a666d8405b52ad35d7e7adaef728db64bd3e79de223bb55" containerID="eb08f8db1d6e27e36a666d8405b52ad35d7e7adaef728db64bd3e79de223bb55"
	May 11 22:57:38 functional-20220511225632-7294 kubelet[5718]: I0511 22:57:38.348193    5718 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:docker ID:eb08f8db1d6e27e36a666d8405b52ad35d7e7adaef728db64bd3e79de223bb55} err="failed to get container status \"eb08f8db1d6e27e36a666d8405b52ad35d7e7adaef728db64bd3e79de223bb55\": rpc error: code = Unknown desc = Error: No such container: eb08f8db1d6e27e36a666d8405b52ad35d7e7adaef728db64bd3e79de223bb55"
	May 11 22:57:38 functional-20220511225632-7294 kubelet[5718]: I0511 22:57:38.361394    5718 request.go:665] Waited for 1.052001229s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods
	May 11 22:57:39 functional-20220511225632-7294 kubelet[5718]: E0511 22:57:39.980132    5718 remote_runtime.go:479] "StopContainer from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 0c161d1f0fc54e20a77d39b045aad7b0cc1f936dd47835514ec3de43d767b404" containerID="0c161d1f0fc54e20a77d39b045aad7b0cc1f936dd47835514ec3de43d767b404"
	May 11 22:57:39 functional-20220511225632-7294 kubelet[5718]: E0511 22:57:39.980196    5718 kuberuntime_container.go:728] "Container termination failed with gracePeriod" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 0c161d1f0fc54e20a77d39b045aad7b0cc1f936dd47835514ec3de43d767b404" pod="kube-system/kube-apiserver-functional-20220511225632-7294" podUID=ac7dfe7f1749461f5e23fe13af8b8122 containerName="kube-apiserver" containerID="docker://0c161d1f0fc54e20a77d39b045aad7b0cc1f936dd47835514ec3de43d767b404" gracePeriod=1
	May 11 22:57:39 functional-20220511225632-7294 kubelet[5718]: E0511 22:57:39.980217    5718 kuberuntime_container.go:753] "Kill container failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 0c161d1f0fc54e20a77d39b045aad7b0cc1f936dd47835514ec3de43d767b404" pod="kube-system/kube-apiserver-functional-20220511225632-7294" podUID=ac7dfe7f1749461f5e23fe13af8b8122 containerName="kube-apiserver" containerID={Type:docker ID:0c161d1f0fc54e20a77d39b045aad7b0cc1f936dd47835514ec3de43d767b404}
	May 11 22:57:39 functional-20220511225632-7294 kubelet[5718]: E0511 22:57:39.981591    5718 kubelet.go:1777] failed to "KillContainer" for "kube-apiserver" with KillContainerError: "rpc error: code = Unknown desc = Error response from daemon: No such container: 0c161d1f0fc54e20a77d39b045aad7b0cc1f936dd47835514ec3de43d767b404"
	May 11 22:57:39 functional-20220511225632-7294 kubelet[5718]: E0511 22:57:39.981647    5718 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"kube-apiserver\" with KillContainerError: \"rpc error: code = Unknown desc = Error response from daemon: No such container: 0c161d1f0fc54e20a77d39b045aad7b0cc1f936dd47835514ec3de43d767b404\"" pod="kube-system/kube-apiserver-functional-20220511225632-7294" podUID=ac7dfe7f1749461f5e23fe13af8b8122
	May 11 22:57:39 functional-20220511225632-7294 kubelet[5718]: I0511 22:57:39.982696    5718 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=ac7dfe7f1749461f5e23fe13af8b8122 path="/var/lib/kubelet/pods/ac7dfe7f1749461f5e23fe13af8b8122/volumes"
	May 11 22:57:40 functional-20220511225632-7294 kubelet[5718]: E0511 22:57:40.311599    5718 projected.go:199] Error preparing data for projected volume kube-api-access-5xj69 for pod kube-system/storage-provisioner: failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:functional-20220511225632-7294" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20220511225632-7294' and this object
	May 11 22:57:40 functional-20220511225632-7294 kubelet[5718]: E0511 22:57:40.311711    5718 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/13d6b36a-da63-427d-9a0c-67cc25dc9131-kube-api-access-5xj69 podName:13d6b36a-da63-427d-9a0c-67cc25dc9131 nodeName:}" failed. No retries permitted until 2022-05-11 22:57:40.811678818 +0000 UTC m=+5.258951565 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5xj69" (UniqueName: "kubernetes.io/projected/13d6b36a-da63-427d-9a0c-67cc25dc9131-kube-api-access-5xj69") pod "storage-provisioner" (UID: "13d6b36a-da63-427d-9a0c-67cc25dc9131") : failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:functional-20220511225632-7294" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20220511225632-7294' and this object
	May 11 22:57:40 functional-20220511225632-7294 kubelet[5718]: E0511 22:57:40.311808    5718 projected.go:199] Error preparing data for projected volume kube-api-access-fpw8c for pod kube-system/coredns-64897985d-rp5lc: failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:functional-20220511225632-7294" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20220511225632-7294' and this object
	May 11 22:57:40 functional-20220511225632-7294 kubelet[5718]: E0511 22:57:40.311872    5718 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/88948fa9-8a33-48b9-b0c6-4e9a46669f71-kube-api-access-fpw8c podName:88948fa9-8a33-48b9-b0c6-4e9a46669f71 nodeName:}" failed. No retries permitted until 2022-05-11 22:57:40.811853171 +0000 UTC m=+5.259125917 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fpw8c" (UniqueName: "kubernetes.io/projected/88948fa9-8a33-48b9-b0c6-4e9a46669f71-kube-api-access-fpw8c") pod "coredns-64897985d-rp5lc" (UID: "88948fa9-8a33-48b9-b0c6-4e9a46669f71") : failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:functional-20220511225632-7294" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20220511225632-7294' and this object
	May 11 22:57:40 functional-20220511225632-7294 kubelet[5718]: E0511 22:57:40.311926    5718 projected.go:199] Error preparing data for projected volume kube-api-access-wjcgq for pod kube-system/kube-proxy-dvl88: failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:functional-20220511225632-7294" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20220511225632-7294' and this object
	May 11 22:57:40 functional-20220511225632-7294 kubelet[5718]: E0511 22:57:40.311962    5718 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/2a0a0c62-17ad-4a54-a1c2-c7afd88c9c38-kube-api-access-wjcgq podName:2a0a0c62-17ad-4a54-a1c2-c7afd88c9c38 nodeName:}" failed. No retries permitted until 2022-05-11 22:57:40.811951047 +0000 UTC m=+5.259223787 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wjcgq" (UniqueName: "kubernetes.io/projected/2a0a0c62-17ad-4a54-a1c2-c7afd88c9c38-kube-api-access-wjcgq") pod "kube-proxy-dvl88" (UID: "2a0a0c62-17ad-4a54-a1c2-c7afd88c9c38") : failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:functional-20220511225632-7294" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20220511225632-7294' and this object
	May 11 22:57:40 functional-20220511225632-7294 kubelet[5718]: I0511 22:57:40.467768    5718 kubelet.go:1698] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-functional-20220511225632-7294"
	May 11 22:57:40 functional-20220511225632-7294 kubelet[5718]: E0511 22:57:40.470681    5718 kubelet.go:1711] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-functional-20220511225632-7294\" already exists" pod="kube-system/kube-controller-manager-functional-20220511225632-7294"
	May 11 22:57:40 functional-20220511225632-7294 kubelet[5718]: E0511 22:57:40.472636    5718 kubelet.go:1711] "Failed creating a mirror pod for" err="pods \"kube-scheduler-functional-20220511225632-7294\" already exists" pod="kube-system/kube-scheduler-functional-20220511225632-7294"
	May 11 22:57:40 functional-20220511225632-7294 kubelet[5718]: E0511 22:57:40.472662    5718 kubelet.go:1711] "Failed creating a mirror pod for" err="pods \"etcd-functional-20220511225632-7294\" already exists" pod="kube-system/etcd-functional-20220511225632-7294"
	May 11 22:57:42 functional-20220511225632-7294 kubelet[5718]: I0511 22:57:42.265494    5718 scope.go:110] "RemoveContainer" containerID="371f9ddf15672c187f5cb7d898615742ac9ecc66ab0e259c357ef2f14c2fdb9e"
	May 11 22:57:42 functional-20220511225632-7294 kubelet[5718]: I0511 22:57:42.391918    5718 kubelet.go:1693] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-20220511225632-7294" podUID=a4f35d6f-dcbc-443e-8e7e-1fc37b28b6b5
	May 11 22:57:42 functional-20220511225632-7294 kubelet[5718]: I0511 22:57:42.837241    5718 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-rp5lc through plugin: invalid network status for"
	May 11 22:57:43 functional-20220511225632-7294 kubelet[5718]: I0511 22:57:43.399555    5718 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-rp5lc through plugin: invalid network status for"
	May 11 22:57:44 functional-20220511225632-7294 kubelet[5718]: I0511 22:57:44.414070    5718 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
	
	* 
	* ==> storage-provisioner [371f9ddf1567] <==
	* I0511 22:57:30.676978       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0511 22:57:30.680422       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> storage-provisioner [90b845a53486] <==
	* I0511 22:57:42.368341       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0511 22:57:42.375088       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0511 22:57:42.375126       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-20220511225632-7294 -n functional-20220511225632-7294
helpers_test.go:261: (dbg) Run:  kubectl --context functional-20220511225632-7294 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestFunctional/serial/ComponentHealth]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context functional-20220511225632-7294 describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context functional-20220511225632-7294 describe pod : exit status 1 (41.905518ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context functional-20220511225632-7294 describe pod : exit status 1
--- FAIL: TestFunctional/serial/ComponentHealth (2.34s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/Start (531.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p custom-weave-20220511231549-7294 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=docker
E0511 23:20:23.069113    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511231318-7294/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p custom-weave-20220511231549-7294 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=docker: exit status 105 (8m51.435739549s)

                                                
                                                
-- stdout --
	* [custom-weave-20220511231549-7294] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13639
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	* Using Docker driver with the root privilege
	* Starting control plane node custom-weave-20220511231549-7294 in cluster custom-weave-20220511231549-7294
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.5 on Docker 20.10.15 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring testdata/weavenet.yaml (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0511 23:19:57.229110  232884 out.go:296] Setting OutFile to fd 1 ...
	I0511 23:19:57.229347  232884 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 23:19:57.229360  232884 out.go:309] Setting ErrFile to fd 2...
	I0511 23:19:57.229365  232884 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 23:19:57.229526  232884 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/bin
	I0511 23:19:57.229854  232884 out.go:303] Setting JSON to false
	I0511 23:19:57.231507  232884 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":3739,"bootTime":1652307458,"procs":938,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0511 23:19:57.231582  232884 start.go:125] virtualization: kvm guest
	I0511 23:19:57.234527  232884 out.go:177] * [custom-weave-20220511231549-7294] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0511 23:19:57.236298  232884 out.go:177]   - MINIKUBE_LOCATION=13639
	I0511 23:19:57.236238  232884 notify.go:193] Checking for updates...
	I0511 23:19:57.237860  232884 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0511 23:19:57.239636  232884 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	I0511 23:19:57.241521  232884 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube
	I0511 23:19:57.243406  232884 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0511 23:19:57.245395  232884 config.go:178] Loaded profile config "auto-20220511231548-7294": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0511 23:19:57.245484  232884 config.go:178] Loaded profile config "cilium-20220511231549-7294": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0511 23:19:57.245565  232884 config.go:178] Loaded profile config "docker-flags-20220511231637-7294": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0511 23:19:57.245623  232884 driver.go:358] Setting default libvirt URI to qemu:///system
	I0511 23:19:57.289560  232884 docker.go:137] docker version: linux-20.10.15
	I0511 23:19:57.289675  232884 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 23:19:57.404060  232884 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:39 SystemTime:2022-05-11 23:19:57.325161239 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0511 23:19:57.404176  232884 docker.go:254] overlay module found
	I0511 23:19:57.407804  232884 out.go:177] * Using the docker driver based on user configuration
	I0511 23:19:57.409244  232884 start.go:284] selected driver: docker
	I0511 23:19:57.409261  232884 start.go:801] validating driver "docker" against <nil>
	I0511 23:19:57.409282  232884 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0511 23:19:57.410247  232884 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 23:19:57.518950  232884 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:39 SystemTime:2022-05-11 23:19:57.442376581 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clie
ntInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0511 23:19:57.519100  232884 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0511 23:19:57.519257  232884 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0511 23:19:57.521645  232884 out.go:177] * Using Docker driver with the root privilege
	I0511 23:19:57.523153  232884 cni.go:95] Creating CNI manager for "testdata/weavenet.yaml"
	I0511 23:19:57.523192  232884 start_flags.go:301] Found "testdata/weavenet.yaml" CNI - setting NetworkPlugin=cni
	I0511 23:19:57.523207  232884 start_flags.go:306] config:
	{Name:custom-weave-20220511231549-7294 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:custom-weave-20220511231549-7294 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:c
luster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0511 23:19:57.525204  232884 out.go:177] * Starting control plane node custom-weave-20220511231549-7294 in cluster custom-weave-20220511231549-7294
	I0511 23:19:57.527813  232884 cache.go:120] Beginning downloading kic base image for docker with docker
	I0511 23:19:57.529403  232884 out.go:177] * Pulling base image ...
	I0511 23:19:57.530897  232884 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0511 23:19:57.530958  232884 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0511 23:19:57.530971  232884 cache.go:57] Caching tarball of preloaded images
	I0511 23:19:57.530994  232884 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon
	I0511 23:19:57.531221  232884 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0511 23:19:57.531242  232884 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0511 23:19:57.531450  232884 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511231549-7294/config.json ...
	I0511 23:19:57.531496  232884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511231549-7294/config.json: {Name:mk0e2a4c5afc31f9f800cdaddeac8257a0e9c56e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 23:19:57.577710  232884 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon, skipping pull
	I0511 23:19:57.577748  232884 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a exists in daemon, skipping load
	I0511 23:19:57.577760  232884 cache.go:206] Successfully downloaded all kic artifacts
	I0511 23:19:57.577797  232884 start.go:352] acquiring machines lock for custom-weave-20220511231549-7294: {Name:mk6bc554091c76e7a0b2fd8b59caa20830ee83cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0511 23:19:57.577965  232884 start.go:356] acquired machines lock for "custom-weave-20220511231549-7294" in 138.604µs
	I0511 23:19:57.578001  232884 start.go:91] Provisioning new machine with config: &{Name:custom-weave-20220511231549-7294 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:custom-weave-20220511231549-7294 Namesp
ace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0511 23:19:57.578092  232884 start.go:131] createHost starting for "" (driver="docker")
	I0511 23:19:57.580860  232884 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0511 23:19:57.581097  232884 start.go:165] libmachine.API.Create for "custom-weave-20220511231549-7294" (driver="docker")
	I0511 23:19:57.581125  232884 client.go:168] LocalClient.Create starting
	I0511 23:19:57.581201  232884 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem
	I0511 23:19:57.581273  232884 main.go:134] libmachine: Decoding PEM data...
	I0511 23:19:57.581293  232884 main.go:134] libmachine: Parsing certificate...
	I0511 23:19:57.581366  232884 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/cert.pem
	I0511 23:19:57.581388  232884 main.go:134] libmachine: Decoding PEM data...
	I0511 23:19:57.581401  232884 main.go:134] libmachine: Parsing certificate...
	I0511 23:19:57.581831  232884 cli_runner.go:164] Run: docker network inspect custom-weave-20220511231549-7294 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0511 23:19:57.613914  232884 cli_runner.go:211] docker network inspect custom-weave-20220511231549-7294 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0511 23:19:57.613989  232884 network_create.go:272] running [docker network inspect custom-weave-20220511231549-7294] to gather additional debugging logs...
	I0511 23:19:57.614015  232884 cli_runner.go:164] Run: docker network inspect custom-weave-20220511231549-7294
	W0511 23:19:57.645033  232884 cli_runner.go:211] docker network inspect custom-weave-20220511231549-7294 returned with exit code 1
	I0511 23:19:57.645068  232884 network_create.go:275] error running [docker network inspect custom-weave-20220511231549-7294]: docker network inspect custom-weave-20220511231549-7294: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: custom-weave-20220511231549-7294
	I0511 23:19:57.645093  232884 network_create.go:277] output of [docker network inspect custom-weave-20220511231549-7294]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: custom-weave-20220511231549-7294
	
	** /stderr **
	I0511 23:19:57.645139  232884 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0511 23:19:57.677919  232884 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-c71fbe990017 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:b7:a8:16:0e}}
	I0511 23:19:57.678456  232884 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc000609bf8] misses:0}
	I0511 23:19:57.678503  232884 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0511 23:19:57.678521  232884 network_create.go:115] attempt to create docker network custom-weave-20220511231549-7294 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0511 23:19:57.678563  232884 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220511231549-7294
	I0511 23:19:57.748857  232884 network_create.go:99] docker network custom-weave-20220511231549-7294 192.168.58.0/24 created
	I0511 23:19:57.748891  232884 kic.go:106] calculated static IP "192.168.58.2" for the "custom-weave-20220511231549-7294" container
	I0511 23:19:57.748959  232884 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0511 23:19:57.784793  232884 cli_runner.go:164] Run: docker volume create custom-weave-20220511231549-7294 --label name.minikube.sigs.k8s.io=custom-weave-20220511231549-7294 --label created_by.minikube.sigs.k8s.io=true
	I0511 23:19:57.823929  232884 oci.go:103] Successfully created a docker volume custom-weave-20220511231549-7294
	I0511 23:19:57.824008  232884 cli_runner.go:164] Run: docker run --rm --name custom-weave-20220511231549-7294-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220511231549-7294 --entrypoint /usr/bin/test -v custom-weave-20220511231549-7294:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib
	I0511 23:19:58.360482  232884 oci.go:107] Successfully prepared a docker volume custom-weave-20220511231549-7294
	I0511 23:19:58.360561  232884 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0511 23:19:58.360588  232884 kic.go:179] Starting extracting preloaded images to volume ...
	I0511 23:19:58.360668  232884 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20220511231549-7294:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir
	I0511 23:20:05.087689  232884 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20220511231549-7294:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir: (6.726926663s)
	I0511 23:20:05.087740  232884 kic.go:188] duration metric: took 6.727146 seconds to extract preloaded images to volume
	W0511 23:20:05.087906  232884 cgroups_linux.go:88] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0511 23:20:05.088027  232884 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0511 23:20:05.227666  232884 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20220511231549-7294 --name custom-weave-20220511231549-7294 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220511231549-7294 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20220511231549-7294 --network custom-weave-20220511231549-7294 --ip 192.168.58.2 --volume custom-weave-20220511231549-7294:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a
	I0511 23:20:05.761677  232884 cli_runner.go:164] Run: docker container inspect custom-weave-20220511231549-7294 --format={{.State.Running}}
	I0511 23:20:05.806311  232884 cli_runner.go:164] Run: docker container inspect custom-weave-20220511231549-7294 --format={{.State.Status}}
	I0511 23:20:05.855594  232884 cli_runner.go:164] Run: docker exec custom-weave-20220511231549-7294 stat /var/lib/dpkg/alternatives/iptables
	I0511 23:20:05.929797  232884 oci.go:247] the created container "custom-weave-20220511231549-7294" has a running status.
	I0511 23:20:05.929841  232884 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/custom-weave-20220511231549-7294/id_rsa...
	I0511 23:20:06.340387  232884 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/custom-weave-20220511231549-7294/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0511 23:20:06.432515  232884 cli_runner.go:164] Run: docker container inspect custom-weave-20220511231549-7294 --format={{.State.Status}}
	I0511 23:20:06.465120  232884 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0511 23:20:06.465149  232884 kic_runner.go:114] Args: [docker exec --privileged custom-weave-20220511231549-7294 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0511 23:20:06.565672  232884 cli_runner.go:164] Run: docker container inspect custom-weave-20220511231549-7294 --format={{.State.Status}}
	I0511 23:20:06.605439  232884 machine.go:88] provisioning docker machine ...
	I0511 23:20:06.605487  232884 ubuntu.go:169] provisioning hostname "custom-weave-20220511231549-7294"
	I0511 23:20:06.605552  232884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220511231549-7294
	I0511 23:20:06.642530  232884 main.go:134] libmachine: Using SSH client type: native
	I0511 23:20:06.642769  232884 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49369 <nil> <nil>}
	I0511 23:20:06.642798  232884 main.go:134] libmachine: About to run SSH command:
	sudo hostname custom-weave-20220511231549-7294 && echo "custom-weave-20220511231549-7294" | sudo tee /etc/hostname
	I0511 23:20:06.868155  232884 main.go:134] libmachine: SSH cmd err, output: <nil>: custom-weave-20220511231549-7294
	
	I0511 23:20:06.868272  232884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220511231549-7294
	I0511 23:20:06.904579  232884 main.go:134] libmachine: Using SSH client type: native
	I0511 23:20:06.904779  232884 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49369 <nil> <nil>}
	I0511 23:20:06.904810  232884 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-weave-20220511231549-7294' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-weave-20220511231549-7294/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-weave-20220511231549-7294' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0511 23:20:07.014355  232884 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0511 23:20:07.014392  232884 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/key.pem ServerCertR
emotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube}
	I0511 23:20:07.014419  232884 ubuntu.go:177] setting up certificates
	I0511 23:20:07.014430  232884 provision.go:83] configureAuth start
	I0511 23:20:07.014489  232884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220511231549-7294
	I0511 23:20:07.047637  232884 provision.go:138] copyHostCerts
	I0511 23:20:07.047714  232884 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/key.pem, removing ...
	I0511 23:20:07.047729  232884 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/key.pem
	I0511 23:20:07.054610  232884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/key.pem (1679 bytes)
	I0511 23:20:07.054739  232884 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.pem, removing ...
	I0511 23:20:07.054752  232884 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.pem
	I0511 23:20:07.054783  232884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.pem (1078 bytes)
	I0511 23:20:07.054846  232884 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cert.pem, removing ...
	I0511 23:20:07.054855  232884 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cert.pem
	I0511 23:20:07.054876  232884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cert.pem (1123 bytes)
	I0511 23:20:07.054917  232884 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca-key.pem org=jenkins.custom-weave-20220511231549-7294 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube custom-weave-20220511231549-7294]
	I0511 23:20:07.254351  232884 provision.go:172] copyRemoteCerts
	I0511 23:20:07.295607  232884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0511 23:20:07.295655  232884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220511231549-7294
	I0511 23:20:07.330523  232884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/custom-weave-20220511231549-7294/id_rsa Username:docker}
	I0511 23:20:07.414262  232884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0511 23:20:07.433746  232884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0511 23:20:07.453110  232884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0511 23:20:07.472028  232884 provision.go:86] duration metric: configureAuth took 457.582557ms
	I0511 23:20:07.472064  232884 ubuntu.go:193] setting minikube options for container-runtime
	I0511 23:20:07.472252  232884 config.go:178] Loaded profile config "custom-weave-20220511231549-7294": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0511 23:20:07.472311  232884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220511231549-7294
	I0511 23:20:07.512428  232884 main.go:134] libmachine: Using SSH client type: native
	I0511 23:20:07.512588  232884 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49369 <nil> <nil>}
	I0511 23:20:07.512607  232884 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0511 23:20:07.630483  232884 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0511 23:20:07.630508  232884 ubuntu.go:71] root file system type: overlay
	I0511 23:20:07.630674  232884 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0511 23:20:07.630731  232884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220511231549-7294
	I0511 23:20:07.664870  232884 main.go:134] libmachine: Using SSH client type: native
	I0511 23:20:07.665039  232884 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49369 <nil> <nil>}
	I0511 23:20:07.665099  232884 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0511 23:20:07.788784  232884 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0511 23:20:07.788869  232884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220511231549-7294
	I0511 23:20:07.829635  232884 main.go:134] libmachine: Using SSH client type: native
	I0511 23:20:07.829829  232884 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49369 <nil> <nil>}
	I0511 23:20:07.829862  232884 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0511 23:20:10.677755  232884 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-05 13:17:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-11 23:20:07.784073335 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0511 23:20:10.677794  232884 machine.go:91] provisioned docker machine in 4.072327756s
	I0511 23:20:10.677805  232884 client.go:171] LocalClient.Create took 13.096674685s
	I0511 23:20:10.677819  232884 start.go:173] duration metric: libmachine.API.Create for "custom-weave-20220511231549-7294" took 13.096722657s
	I0511 23:20:10.677839  232884 start.go:306] post-start starting for "custom-weave-20220511231549-7294" (driver="docker")
	I0511 23:20:10.677845  232884 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0511 23:20:10.677928  232884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0511 23:20:10.677985  232884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220511231549-7294
	I0511 23:20:10.725827  232884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/custom-weave-20220511231549-7294/id_rsa Username:docker}
	I0511 23:20:10.815060  232884 ssh_runner.go:195] Run: cat /etc/os-release
	I0511 23:20:10.818069  232884 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0511 23:20:10.818101  232884 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0511 23:20:10.818147  232884 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0511 23:20:10.818155  232884 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0511 23:20:10.818173  232884 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/addons for local assets ...
	I0511 23:20:10.818243  232884 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files for local assets ...
	I0511 23:20:10.818322  232884 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/72942.pem -> 72942.pem in /etc/ssl/certs
	I0511 23:20:10.818423  232884 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0511 23:20:10.825907  232884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/72942.pem --> /etc/ssl/certs/72942.pem (1708 bytes)
	I0511 23:20:10.844693  232884 start.go:309] post-start completed in 166.841358ms
	I0511 23:20:10.845067  232884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220511231549-7294
	I0511 23:20:10.904772  232884 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511231549-7294/config.json ...
	I0511 23:20:10.905023  232884 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0511 23:20:10.905062  232884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220511231549-7294
	I0511 23:20:10.940945  232884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/custom-weave-20220511231549-7294/id_rsa Username:docker}
	I0511 23:20:11.027510  232884 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0511 23:20:11.033186  232884 start.go:134] duration metric: createHost completed in 13.455081057s
	I0511 23:20:11.033218  232884 start.go:81] releasing machines lock for "custom-weave-20220511231549-7294", held for 13.455233531s
	I0511 23:20:11.033319  232884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220511231549-7294
	I0511 23:20:11.084875  232884 ssh_runner.go:195] Run: systemctl --version
	I0511 23:20:11.084932  232884 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0511 23:20:11.084947  232884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220511231549-7294
	I0511 23:20:11.084992  232884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220511231549-7294
	I0511 23:20:11.127826  232884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/custom-weave-20220511231549-7294/id_rsa Username:docker}
	I0511 23:20:11.127898  232884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/custom-weave-20220511231549-7294/id_rsa Username:docker}
	I0511 23:20:11.206204  232884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0511 23:20:11.239848  232884 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0511 23:20:11.251955  232884 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0511 23:20:11.252035  232884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0511 23:20:11.270567  232884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
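	The command above points crictl at the dockershim socket by generating /etc/crictl.yaml with a `printf %s ... | sudo tee` pipeline. A sketch of the same two-key file written to a temp directory instead of /etc (so no sudo is needed; the path is illustrative):

	```shell
	# Generate the same crictl config minikube writes above,
	# into a temp dir instead of /etc.
	set -eu
	dir=$(mktemp -d)
	printf '%s' "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | tee "$dir/crictl.yaml" >/dev/null
	# Both endpoints reference the dockershim socket:
	grep -c 'dockershim.sock' "$dir/crictl.yaml"
	```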
	I0511 23:20:11.293535  232884 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0511 23:20:11.429368  232884 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0511 23:20:11.530830  232884 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0511 23:20:11.540500  232884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0511 23:20:11.643565  232884 ssh_runner.go:195] Run: sudo systemctl start docker
	I0511 23:20:11.654354  232884 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0511 23:20:11.723998  232884 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0511 23:20:11.787602  232884 out.go:204] * Preparing Kubernetes v1.23.5 on Docker 20.10.15 ...
	I0511 23:20:11.787705  232884 cli_runner.go:164] Run: docker network inspect custom-weave-20220511231549-7294 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0511 23:20:11.849583  232884 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0511 23:20:11.853999  232884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
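	The /etc/hosts update above (and the control-plane.minikube.internal one later) uses a filter-then-append idiom: `grep -v` drops any stale line for the name, the fresh entry is appended, and the result is copied back over the original. A sketch against a temp hosts file (bash syntax, matching the `/bin/bash -c` invocation in the log; the stale 192.168.49.1 entry is illustrative):

	```shell
	# Refresh a hosts entry the way the logged command does, using a
	# temp file in place of /etc/hosts (no sudo needed for the sketch).
	set -eu
	hosts=$(mktemp)
	printf '127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n' > "$hosts"
	tmp=$(mktemp)
	# Drop any existing host.minikube.internal line, append the current one.
	{ grep -v $'\thost.minikube.internal$' "$hosts"; \
	  printf '192.168.58.1\thost.minikube.internal\n'; } > "$tmp"
	cp "$tmp" "$hosts"
	cat "$hosts"
	```

	Writing to a temp file first and copying back keeps the update atomic from the reader's point of view, and re-running the command is idempotent: the old entry is always filtered out before the new one is appended.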
	I0511 23:20:11.867856  232884 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0511 23:20:11.867919  232884 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0511 23:20:11.929298  232884 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0511 23:20:11.929321  232884 docker.go:541] Images already preloaded, skipping extraction
	I0511 23:20:11.929366  232884 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0511 23:20:11.979207  232884 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0511 23:20:11.979235  232884 cache_images.go:84] Images are preloaded, skipping loading
	I0511 23:20:11.979299  232884 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0511 23:20:12.082618  232884 cni.go:95] Creating CNI manager for "testdata/weavenet.yaml"
	I0511 23:20:12.082651  232884 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0511 23:20:12.082671  232884 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-weave-20220511231549-7294 NodeName:custom-weave-20220511231549-7294 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0511 23:20:12.082863  232884 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "custom-weave-20220511231549-7294"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0511 23:20:12.082948  232884 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=custom-weave-20220511231549-7294 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:custom-weave-20220511231549-7294 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:}
	I0511 23:20:12.083008  232884 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0511 23:20:12.092361  232884 binaries.go:44] Found k8s binaries, skipping transfer
	I0511 23:20:12.092434  232884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0511 23:20:12.102040  232884 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0511 23:20:12.120093  232884 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0511 23:20:12.136804  232884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2054 bytes)
	I0511 23:20:12.155660  232884 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0511 23:20:12.159559  232884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0511 23:20:12.171981  232884 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511231549-7294 for IP: 192.168.58.2
	I0511 23:20:12.172187  232884 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.key
	I0511 23:20:12.172289  232884 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/proxy-client-ca.key
	I0511 23:20:12.172356  232884 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511231549-7294/client.key
	I0511 23:20:12.172371  232884 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511231549-7294/client.crt with IP's: []
	I0511 23:20:12.519858  232884 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511231549-7294/client.crt ...
	I0511 23:20:12.519901  232884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511231549-7294/client.crt: {Name:mk5872dc462eb22ea6f3f44e0804c39a7b387e13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 23:20:12.520171  232884 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511231549-7294/client.key ...
	I0511 23:20:12.520195  232884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511231549-7294/client.key: {Name:mk7a439112421563d09bdd2f7e4dbd8851288145 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 23:20:12.520320  232884 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511231549-7294/apiserver.key.cee25041
	I0511 23:20:12.520341  232884 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511231549-7294/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0511 23:20:12.637759  232884 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511231549-7294/apiserver.crt.cee25041 ...
	I0511 23:20:12.637799  232884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511231549-7294/apiserver.crt.cee25041: {Name:mkcf22bb38622c379d63332b3755c48def1968cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 23:20:12.637997  232884 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511231549-7294/apiserver.key.cee25041 ...
	I0511 23:20:12.638011  232884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511231549-7294/apiserver.key.cee25041: {Name:mk10928b83b27db67fe9db45cc43f7b8fa52ad00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 23:20:12.638104  232884 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511231549-7294/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511231549-7294/apiserver.crt
	I0511 23:20:12.638190  232884 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511231549-7294/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511231549-7294/apiserver.key
	I0511 23:20:12.638237  232884 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511231549-7294/proxy-client.key
	I0511 23:20:12.638250  232884 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511231549-7294/proxy-client.crt with IP's: []
	I0511 23:20:12.864430  232884 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511231549-7294/proxy-client.crt ...
	I0511 23:20:12.864473  232884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511231549-7294/proxy-client.crt: {Name:mk3d1493d83bfbc9b4ca3422d305ee2224e33fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 23:20:12.864688  232884 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511231549-7294/proxy-client.key ...
	I0511 23:20:12.864709  232884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511231549-7294/proxy-client.key: {Name:mka444324e6188c5b6123d872c668bacab8bb42a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 23:20:12.864960  232884 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/7294.pem (1338 bytes)
	W0511 23:20:12.865014  232884 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/7294_empty.pem, impossibly tiny 0 bytes
	I0511 23:20:12.865034  232884 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca-key.pem (1679 bytes)
	I0511 23:20:12.865079  232884 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem (1078 bytes)
	I0511 23:20:12.865117  232884 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/cert.pem (1123 bytes)
	I0511 23:20:12.865151  232884 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/key.pem (1679 bytes)
	I0511 23:20:12.865216  232884 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/72942.pem (1708 bytes)
	I0511 23:20:12.865994  232884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511231549-7294/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0511 23:20:12.890510  232884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511231549-7294/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0511 23:20:12.913153  232884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511231549-7294/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0511 23:20:12.930961  232884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511231549-7294/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0511 23:20:12.949052  232884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0511 23:20:12.975117  232884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0511 23:20:13.004665  232884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0511 23:20:13.024423  232884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0511 23:20:13.043521  232884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0511 23:20:13.065045  232884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/7294.pem --> /usr/share/ca-certificates/7294.pem (1338 bytes)
	I0511 23:20:13.089168  232884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/72942.pem --> /usr/share/ca-certificates/72942.pem (1708 bytes)
	I0511 23:20:13.114882  232884 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0511 23:20:13.130299  232884 ssh_runner.go:195] Run: openssl version
	I0511 23:20:13.135608  232884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7294.pem && ln -fs /usr/share/ca-certificates/7294.pem /etc/ssl/certs/7294.pem"
	I0511 23:20:13.144017  232884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7294.pem
	I0511 23:20:13.147544  232884 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 11 22:56 /usr/share/ca-certificates/7294.pem
	I0511 23:20:13.147604  232884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7294.pem
	I0511 23:20:13.152901  232884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7294.pem /etc/ssl/certs/51391683.0"
	I0511 23:20:13.163758  232884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72942.pem && ln -fs /usr/share/ca-certificates/72942.pem /etc/ssl/certs/72942.pem"
	I0511 23:20:13.174221  232884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72942.pem
	I0511 23:20:13.178799  232884 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 11 22:56 /usr/share/ca-certificates/72942.pem
	I0511 23:20:13.178866  232884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72942.pem
	I0511 23:20:13.185331  232884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72942.pem /etc/ssl/certs/3ec20f2e.0"
	I0511 23:20:13.195244  232884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0511 23:20:13.205174  232884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0511 23:20:13.209221  232884 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 11 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0511 23:20:13.209292  232884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0511 23:20:13.215649  232884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0511 23:20:13.223933  232884 kubeadm.go:391] StartCluster: {Name:custom-weave-20220511231549-7294 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:custom-weave-20220511231549-7294 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0511 23:20:13.224069  232884 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0511 23:20:13.256780  232884 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0511 23:20:13.265471  232884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0511 23:20:13.275942  232884 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0511 23:20:13.276010  232884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0511 23:20:13.286516  232884 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0511 23:20:13.286569  232884 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0511 23:20:25.425036  232884 out.go:204]   - Generating certificates and keys ...
	I0511 23:20:25.428308  232884 out.go:204]   - Booting up control plane ...
	I0511 23:20:25.431281  232884 out.go:204]   - Configuring RBAC rules ...
	I0511 23:20:25.433534  232884 cni.go:95] Creating CNI manager for "testdata/weavenet.yaml"
	I0511 23:20:25.435519  232884 out.go:177] * Configuring testdata/weavenet.yaml (Container Networking Interface) ...
	I0511 23:20:25.437387  232884 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0511 23:20:25.437445  232884 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I0511 23:20:25.458945  232884 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/tmp/minikube/cni.yaml': No such file or directory
	I0511 23:20:25.458982  232884 ssh_runner.go:362] scp testdata/weavenet.yaml --> /var/tmp/minikube/cni.yaml (10948 bytes)
	I0511 23:20:25.556685  232884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0511 23:20:26.733474  232884 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.176689415s)
	I0511 23:20:26.733551  232884 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0511 23:20:26.733634  232884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:20:26.733651  232884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=50a7977b568d2ad3e04003527a57f4502d6177a0 minikube.k8s.io/name=custom-weave-20220511231549-7294 minikube.k8s.io/updated_at=2022_05_11T23_20_26_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:20:26.812693  232884 ops.go:34] apiserver oom_adj: -16
	I0511 23:20:26.812782  232884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:20:27.413439  232884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:20:27.912939  232884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:20:28.413569  232884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:20:28.913234  232884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:20:29.413608  232884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:20:29.912814  232884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:20:30.413384  232884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:20:30.913746  232884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:20:31.413455  232884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:20:31.913036  232884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:20:32.412872  232884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:20:32.913335  232884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:20:33.413669  232884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:20:33.913699  232884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:20:34.413174  232884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:20:34.913013  232884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:20:35.413670  232884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:20:35.912743  232884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:20:36.413327  232884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:20:36.913577  232884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:20:37.412817  232884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:20:37.913338  232884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:20:37.989800  232884 kubeadm.go:1020] duration metric: took 11.256219037s to wait for elevateKubeSystemPrivileges.
	I0511 23:20:37.989840  232884 kubeadm.go:393] StartCluster complete in 24.765918726s
	I0511 23:20:37.989862  232884 settings.go:142] acquiring lock: {Name:mk1287875a6024bfdfd8882975fa4d7c31d85e31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 23:20:37.990005  232884 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	I0511 23:20:37.991430  232884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig: {Name:mka611e3c6ccae6ff6a6751a4f0fde8a6d2789a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 23:20:38.611793  232884 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "custom-weave-20220511231549-7294" rescaled to 1
	I0511 23:20:38.611861  232884 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0511 23:20:38.612032  232884 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0511 23:20:38.612026  232884 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0511 23:20:38.743164  232884 out.go:177] * Verifying Kubernetes components...
	I0511 23:20:38.612196  232884 config.go:178] Loaded profile config "custom-weave-20220511231549-7294": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0511 23:20:38.743248  232884 addons.go:65] Setting default-storageclass=true in profile "custom-weave-20220511231549-7294"
	I0511 23:20:38.743228  232884 addons.go:65] Setting storage-provisioner=true in profile "custom-weave-20220511231549-7294"
	I0511 23:20:38.803518  232884 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0511 23:20:38.975210  232884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0511 23:20:38.975238  232884 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-weave-20220511231549-7294"
	I0511 23:20:38.975255  232884 addons.go:153] Setting addon storage-provisioner=true in "custom-weave-20220511231549-7294"
	W0511 23:20:38.975270  232884 addons.go:165] addon storage-provisioner should already be in state true
	I0511 23:20:38.975320  232884 host.go:66] Checking if "custom-weave-20220511231549-7294" exists ...
	I0511 23:20:38.975667  232884 cli_runner.go:164] Run: docker container inspect custom-weave-20220511231549-7294 --format={{.State.Status}}
	I0511 23:20:38.975845  232884 cli_runner.go:164] Run: docker container inspect custom-weave-20220511231549-7294 --format={{.State.Status}}
	I0511 23:20:38.988617  232884 node_ready.go:35] waiting up to 5m0s for node "custom-weave-20220511231549-7294" to be "Ready" ...
	I0511 23:20:39.214348  232884 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0511 23:20:39.311678  232884 addons.go:153] Setting addon default-storageclass=true in "custom-weave-20220511231549-7294"
	W0511 23:20:39.349073  232884 addons.go:165] addon default-storageclass should already be in state true
	I0511 23:20:39.312142  232884 node_ready.go:49] node "custom-weave-20220511231549-7294" has status "Ready":"True"
	I0511 23:20:39.349108  232884 node_ready.go:38] duration metric: took 360.446141ms waiting for node "custom-weave-20220511231549-7294" to be "Ready" ...
	I0511 23:20:39.349136  232884 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0511 23:20:39.349216  232884 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0511 23:20:39.349238  232884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0511 23:20:39.349253  232884 host.go:66] Checking if "custom-weave-20220511231549-7294" exists ...
	I0511 23:20:39.349301  232884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220511231549-7294
	I0511 23:20:39.349789  232884 cli_runner.go:164] Run: docker container inspect custom-weave-20220511231549-7294 --format={{.State.Status}}
	I0511 23:20:39.358362  232884 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-42n8l" in "kube-system" namespace to be "Ready" ...
	I0511 23:20:39.387239  232884 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0511 23:20:39.387267  232884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0511 23:20:39.387320  232884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220511231549-7294
	I0511 23:20:39.387641  232884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/custom-weave-20220511231549-7294/id_rsa Username:docker}
	I0511 23:20:39.422015  232884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/custom-weave-20220511231549-7294/id_rsa Username:docker}
	I0511 23:20:39.477628  232884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0511 23:20:39.513124  232884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0511 23:20:39.561686  232884 start.go:815] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
	I0511 23:20:40.883199  232884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.405518302s)
	I0511 23:20:40.883282  232884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.370124275s)
	I0511 23:20:41.005116  232884 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0511 23:20:41.198253  232884 addons.go:417] enableAddons completed in 2.586214279s
	I0511 23:20:41.600579  232884 pod_ready.go:102] pod "coredns-64897985d-42n8l" in "kube-system" namespace has status "Ready":"False"
	I0511 23:20:43.871764  232884 pod_ready.go:102] pod "coredns-64897985d-42n8l" in "kube-system" namespace has status "Ready":"False"
	I0511 23:20:46.371687  232884 pod_ready.go:102] pod "coredns-64897985d-42n8l" in "kube-system" namespace has status "Ready":"False"
	I0511 23:20:47.368106  232884 pod_ready.go:97] error getting pod "coredns-64897985d-42n8l" in "kube-system" namespace (skipping!): pods "coredns-64897985d-42n8l" not found
	I0511 23:20:47.368140  232884 pod_ready.go:81] duration metric: took 8.009749154s waiting for pod "coredns-64897985d-42n8l" in "kube-system" namespace to be "Ready" ...
	E0511 23:20:47.368152  232884 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-64897985d-42n8l" in "kube-system" namespace (skipping!): pods "coredns-64897985d-42n8l" not found
	I0511 23:20:47.368162  232884 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-7lrgz" in "kube-system" namespace to be "Ready" ...
	I0511 23:20:49.380080  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:20:51.380258  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:20:53.380342  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:20:55.881025  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:20:57.883360  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:21:00.379669  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:21:02.379755  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:21:04.380523  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:21:06.879123  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:21:08.881272  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:21:11.379739  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:21:13.380879  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:21:15.880368  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:21:18.380290  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:21:20.381161  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:21:22.880262  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:21:25.380526  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:21:27.879864  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:21:29.880204  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:21:31.881873  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:21:34.379766  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:21:36.380131  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:21:38.380249  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:21:40.879765  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:21:42.880007  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:21:45.379740  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:21:47.879614  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:21:49.880097  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:21:52.379757  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:21:54.379866  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:21:56.393880  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:21:58.879048  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:22:00.879881  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:22:03.379819  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:22:05.880130  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:22:08.379936  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:22:10.381722  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:22:12.880993  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:22:15.380064  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:22:17.879779  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:22:19.880674  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:22:22.379718  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:22:24.379854  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:22:26.880625  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:22:29.380194  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:22:31.880856  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:22:34.379424  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:22:36.379933  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:22:38.879551  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:22:41.380292  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:22:43.380384  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:22:45.879800  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:22:48.379937  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:22:50.880552  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:22:53.379482  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:22:55.379794  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:22:57.879228  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:22:59.879921  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:23:02.380136  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:23:04.879458  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:23:07.379452  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:23:09.379975  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:23:11.879402  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:23:13.879854  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:23:15.880188  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:23:18.380317  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:23:20.879848  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:23:23.379777  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:23:25.879908  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:23:28.379842  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:23:30.380248  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:23:32.879956  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:23:34.881063  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:23:37.380111  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:23:39.879863  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:23:42.379606  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:23:44.878687  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:23:46.879120  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:23:48.879365  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:23:50.879791  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:23:53.380567  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:23:55.879493  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:23:57.880310  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:24:00.379498  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:24:02.382050  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:24:04.879095  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:24:06.879302  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:24:09.379212  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:24:11.379511  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:24:13.380115  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:24:15.380249  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:24:17.879469  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:24:20.379228  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:24:22.379706  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:24:24.879046  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:24:26.879088  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:24:29.380115  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:24:31.879729  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:24:34.380132  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:24:36.879721  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:24:39.379627  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:24:41.380332  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:24:43.380899  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:24:45.879671  232884 pod_ready.go:102] pod "coredns-64897985d-7lrgz" in "kube-system" namespace has status "Ready":"False"
	I0511 23:24:47.383660  232884 pod_ready.go:81] duration metric: took 4m0.015483882s waiting for pod "coredns-64897985d-7lrgz" in "kube-system" namespace to be "Ready" ...
	E0511 23:24:47.383685  232884 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0511 23:24:47.383692  232884 pod_ready.go:78] waiting up to 5m0s for pod "etcd-custom-weave-20220511231549-7294" in "kube-system" namespace to be "Ready" ...
	I0511 23:24:47.387917  232884 pod_ready.go:92] pod "etcd-custom-weave-20220511231549-7294" in "kube-system" namespace has status "Ready":"True"
	I0511 23:24:47.387934  232884 pod_ready.go:81] duration metric: took 4.236506ms waiting for pod "etcd-custom-weave-20220511231549-7294" in "kube-system" namespace to be "Ready" ...
	I0511 23:24:47.387943  232884 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-custom-weave-20220511231549-7294" in "kube-system" namespace to be "Ready" ...
	I0511 23:24:47.392411  232884 pod_ready.go:92] pod "kube-apiserver-custom-weave-20220511231549-7294" in "kube-system" namespace has status "Ready":"True"
	I0511 23:24:47.392432  232884 pod_ready.go:81] duration metric: took 4.483257ms waiting for pod "kube-apiserver-custom-weave-20220511231549-7294" in "kube-system" namespace to be "Ready" ...
	I0511 23:24:47.392442  232884 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-custom-weave-20220511231549-7294" in "kube-system" namespace to be "Ready" ...
	I0511 23:24:47.396781  232884 pod_ready.go:92] pod "kube-controller-manager-custom-weave-20220511231549-7294" in "kube-system" namespace has status "Ready":"True"
	I0511 23:24:47.396802  232884 pod_ready.go:81] duration metric: took 4.353152ms waiting for pod "kube-controller-manager-custom-weave-20220511231549-7294" in "kube-system" namespace to be "Ready" ...
	I0511 23:24:47.396815  232884 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-dg5qm" in "kube-system" namespace to be "Ready" ...
	I0511 23:24:47.777815  232884 pod_ready.go:92] pod "kube-proxy-dg5qm" in "kube-system" namespace has status "Ready":"True"
	I0511 23:24:47.777842  232884 pod_ready.go:81] duration metric: took 381.01996ms waiting for pod "kube-proxy-dg5qm" in "kube-system" namespace to be "Ready" ...
	I0511 23:24:47.777855  232884 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-custom-weave-20220511231549-7294" in "kube-system" namespace to be "Ready" ...
	I0511 23:24:48.177989  232884 pod_ready.go:92] pod "kube-scheduler-custom-weave-20220511231549-7294" in "kube-system" namespace has status "Ready":"True"
	I0511 23:24:48.178013  232884 pod_ready.go:81] duration metric: took 400.150007ms waiting for pod "kube-scheduler-custom-weave-20220511231549-7294" in "kube-system" namespace to be "Ready" ...
	I0511 23:24:48.178028  232884 pod_ready.go:78] waiting up to 5m0s for pod "weave-net-6k758" in "kube-system" namespace to be "Ready" ...
	I0511 23:24:50.583499  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:24:52.583695  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:24:54.584428  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:24:57.083201  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:24:59.083476  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:25:01.583825  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:25:03.584561  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:25:06.083222  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:25:08.083460  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:25:10.583633  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:25:13.084017  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:25:15.584827  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:25:18.084830  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:25:20.583040  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:25:23.082931  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:25:25.089026  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:25:27.584575  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:25:30.084136  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:25:32.583582  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:25:34.583718  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:25:37.083322  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:25:39.583976  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:25:42.084091  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:25:44.583822  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:25:47.083693  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:25:49.583538  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:25:51.583798  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:25:53.584168  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:25:56.083610  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:25:58.083844  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:26:00.583989  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:26:03.083421  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:26:05.083734  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:26:07.583496  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:26:09.584422  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:26:12.084229  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:26:14.582994  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:26:17.083857  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:26:19.582888  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:26:22.083870  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:26:24.083993  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:26:26.583451  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:26:28.583963  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:26:31.082976  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:26:33.084004  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:26:35.084521  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:26:37.583786  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:26:40.090836  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:26:42.584395  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:26:45.083156  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:26:47.083880  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:26:49.582565  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:26:51.583829  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:26:53.584130  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:26:56.083277  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:26:58.083704  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:27:00.583260  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:27:02.584518  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:27:05.083041  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:27:07.083716  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:27:09.083749  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:27:11.084458  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:27:13.583421  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:27:15.583769  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:27:18.083228  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:27:20.083487  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:27:22.083622  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:27:24.085095  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:27:26.583295  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:27:29.082624  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:27:31.083182  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:27:33.083423  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:27:35.083576  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:27:37.583623  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:27:40.083933  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:27:42.084454  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:27:44.583207  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:27:46.583949  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:27:49.083949  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:27:51.583819  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:27:54.084269  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:27:56.583768  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:27:59.083816  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:28:01.583243  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:28:03.583561  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:28:06.083484  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:28:08.084743  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:28:10.583491  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:28:12.583707  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:28:14.584275  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:28:16.585153  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:28:19.083794  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:28:21.084048  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:28:23.583262  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:28:25.583801  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:28:28.083812  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:28:30.084635  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:28:32.583752  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:28:34.584042  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:28:37.084226  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:28:39.084606  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:28:41.583203  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:28:43.583552  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:28:46.083278  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:28:48.083674  232884 pod_ready.go:102] pod "weave-net-6k758" in "kube-system" namespace has status "Ready":"False"
	I0511 23:28:48.588446  232884 pod_ready.go:81] duration metric: took 4m0.410406966s waiting for pod "weave-net-6k758" in "kube-system" namespace to be "Ready" ...
	E0511 23:28:48.588472  232884 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0511 23:28:48.588479  232884 pod_ready.go:38] duration metric: took 8m9.239317278s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0511 23:28:48.588507  232884 api_server.go:51] waiting for apiserver process to appear ...
	I0511 23:28:48.591436  232884 out.go:177] 
	W0511 23:28:48.592980  232884 out.go:239] X Exiting due to K8S_APISERVER_MISSING: wait 5m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 5m0s for node: wait for apiserver proc: apiserver process never appeared
	W0511 23:28:48.593074  232884 out.go:239] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W0511 23:28:48.593101  232884 out.go:239] * Related issues:
	* Related issues:
	W0511 23:28:48.593150  232884 out.go:239]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W0511 23:28:48.593211  232884 out.go:239]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I0511 23:28:48.594743  232884 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 105
--- FAIL: TestNetworkPlugins/group/custom-weave/Start (531.46s)

TestNetworkPlugins/group/kindnet/DNS (345.36s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220511231549-7294 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kindnet-20220511231549-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.166068099s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220511231549-7294 exec deployment/netcat -- nslookup kubernetes.default
E0511 23:22:56.713444    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context kindnet-20220511231549-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.130741332s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220511231549-7294 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kindnet-20220511231549-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.134397571s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220511231549-7294 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kindnet-20220511231549-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.146834091s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220511231549-7294 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kindnet-20220511231549-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.138772711s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220511231549-7294 exec deployment/netcat -- nslookup kubernetes.default
E0511 23:24:01.146369    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511231318-7294/client.crt: no such file or directory
E0511 23:24:12.538024    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511225237-7294/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context kindnet-20220511231549-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.131289078s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0511 23:24:14.222963    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511231549-7294/client.crt: no such file or directory
E0511 23:24:14.228244    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511231549-7294/client.crt: no such file or directory
E0511 23:24:14.238526    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511231549-7294/client.crt: no such file or directory
E0511 23:24:14.258853    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511231549-7294/client.crt: no such file or directory
E0511 23:24:14.299208    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511231549-7294/client.crt: no such file or directory
E0511 23:24:14.379596    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511231549-7294/client.crt: no such file or directory
E0511 23:24:14.539987    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511231549-7294/client.crt: no such file or directory
E0511 23:24:14.860567    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511231549-7294/client.crt: no such file or directory
E0511 23:24:15.501614    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511231549-7294/client.crt: no such file or directory
E0511 23:24:16.782252    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511231549-7294/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220511231549-7294 exec deployment/netcat -- nslookup kubernetes.default
E0511 23:24:19.342923    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511231549-7294/client.crt: no such file or directory
E0511 23:24:24.463551    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511231549-7294/client.crt: no such file or directory
E0511 23:24:28.829418    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511231318-7294/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context kindnet-20220511231549-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.126353009s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0511 23:24:34.704453    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511231549-7294/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220511231549-7294 exec deployment/netcat -- nslookup kubernetes.default
E0511 23:24:51.976635    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511225849-7294/client.crt: no such file or directory
E0511 23:24:55.185166    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511231549-7294/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context kindnet-20220511231549-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.150584513s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220511231549-7294 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kindnet-20220511231549-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.142692803s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220511231549-7294 exec deployment/netcat -- nslookup kubernetes.default
E0511 23:26:01.053558    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511231549-7294/client.crt: no such file or directory
E0511 23:26:06.173800    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511231549-7294/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kindnet-20220511231549-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.146401543s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E0511 23:26:16.414511    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511231549-7294/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220511231549-7294 exec deployment/netcat -- nslookup kubernetes.default
E0511 23:26:36.895242    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511231549-7294/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kindnet-20220511231549-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.128893822s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220511231549-7294 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kindnet-20220511231549-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.161287086s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/kindnet/DNS (345.36s)

TestNetworkPlugins/group/enable-default-cni/DNS (323.00s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default
E0511 23:25:36.145714    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511231549-7294/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.146916589s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default
E0511 23:25:55.932227    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511231549-7294/client.crt: no such file or directory
E0511 23:25:55.937561    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511231549-7294/client.crt: no such file or directory
E0511 23:25:55.947826    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511231549-7294/client.crt: no such file or directory
E0511 23:25:55.968100    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511231549-7294/client.crt: no such file or directory
E0511 23:25:56.008403    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511231549-7294/client.crt: no such file or directory
E0511 23:25:56.088877    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511231549-7294/client.crt: no such file or directory
E0511 23:25:56.250005    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511231549-7294/client.crt: no such file or directory
E0511 23:25:56.570601    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511231549-7294/client.crt: no such file or directory
E0511 23:25:57.211584    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511231549-7294/client.crt: no such file or directory
E0511 23:25:58.492539    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511231549-7294/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.142756791s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.137557071s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.14703725s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.13486693s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E0511 23:26:58.066076    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511231549-7294/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.131397223s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E0511 23:27:17.855804    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511231549-7294/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.141022023s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.133636577s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.143813565s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.140531717s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.154821806s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.135693947s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/enable-default-cni/DNS (323.00s)

TestNetworkPlugins/group/auto/DNS (336.69s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.161410939s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context auto-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.140475964s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.145219611s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context auto-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.129143108s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E0511 23:27:56.714316    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.138869976s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.142706867s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context auto-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default
E0511 23:28:39.776047    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511231549-7294/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.128901321s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.142119149s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default
E0511 23:29:41.906530    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511231549-7294/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.127924054s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136207056s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.151480452s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default
E0511 23:32:12.546559    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511231549-7294/client.crt: no such file or directory
E0511 23:32:12.551833    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511231549-7294/client.crt: no such file or directory
E0511 23:32:12.562140    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511231549-7294/client.crt: no such file or directory
E0511 23:32:12.582423    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511231549-7294/client.crt: no such file or directory
E0511 23:32:12.622727    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511231549-7294/client.crt: no such file or directory
E0511 23:32:12.703077    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511231549-7294/client.crt: no such file or directory
E0511 23:32:12.863498    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511231549-7294/client.crt: no such file or directory
E0511 23:32:13.184051    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511231549-7294/client.crt: no such file or directory
E0511 23:32:13.824898    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511231549-7294/client.crt: no such file or directory
E0511 23:32:15.105565    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511231549-7294/client.crt: no such file or directory
E0511 23:32:15.588301    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511225237-7294/client.crt: no such file or directory
E0511 23:32:17.666544    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511231549-7294/client.crt: no such file or directory
E0511 23:32:22.787315    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511231549-7294/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.16521985s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/auto/DNS (336.69s)
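The retries above all fail the same way: the assertion at net_test.go:180 expects the nslookup output to contain the ClusterIP of the `kubernetes.default` service (10.96.0.1 by default in minikube), but every attempt times out. A minimal sketch of that substring check; the "healthy" output below is illustrative, not taken from this run:

```shell
# Sketch of the want=*"10.96.0.1"* assertion: pass iff the lookup
# output contains the kubernetes.default service ClusterIP.
healthy="Name:      kubernetes.default.svc.cluster.local
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local"
failing=";; connection timed out; no servers could be reached"

check() { case "$1" in *"10.96.0.1"*) echo PASS ;; *) echo FAIL ;; esac }
check "$healthy"   # PASS
check "$failing"   # FAIL
```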
TestNetworkPlugins/group/bridge/DNS (348.33s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default
E0511 23:29:12.538063    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511225237-7294/client.crt: no such file or directory
E0511 23:29:14.222965    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511231549-7294/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.144910562s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.155251765s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.12572698s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.133827924s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.134761156s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.125289301s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
E0511 23:30:55.932170    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511231549-7294/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.143253838s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
E0511 23:31:23.616971    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511231549-7294/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.12878766s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.139885914s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.15234814s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
E0511 23:32:53.507984    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511231549-7294/client.crt: no such file or directory
E0511 23:32:56.713492    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default
E0511 23:33:34.468745    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511231549-7294/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.145127555s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
E0511 23:34:01.146316    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511231318-7294/client.crt: no such file or directory
E0511 23:34:12.538689    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511225237-7294/client.crt: no such file or directory
E0511 23:34:14.222624    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511231549-7294/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.143847284s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/bridge/DNS (348.33s)
TestNetworkPlugins/group/kubenet/DNS (363.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default
E0511 23:29:51.975716    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511225849-7294/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.146775583s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.130519914s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.141260158s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.131275678s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.131669952s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.13805226s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.144340162s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.125626341s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.335102305s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.148676598s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.143566078s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0511 23:34:51.975978    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511225849-7294/client.crt: no such file or directory
E0511 23:34:56.389585    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511231549-7294/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default
E0511 23:35:35.041547    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511231548-7294/client.crt: no such file or directory
E0511 23:35:45.281892    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511231548-7294/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220511231548-7294 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.197281132s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/kubenet/DNS (363.15s)

TestNetworkPlugins/group/calico/Start (516.17s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20220511231549-7294 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-20220511231549-7294 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=docker: exit status 80 (8m36.132942813s)

-- stdout --
	* [calico-20220511231549-7294] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13639
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	* Using Docker driver with the root privilege
	* Starting control plane node calico-20220511231549-7294 in cluster calico-20220511231549-7294
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.5 on Docker 20.10.15 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0511 23:31:02.700999  312348 out.go:296] Setting OutFile to fd 1 ...
	I0511 23:31:02.701173  312348 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 23:31:02.701181  312348 out.go:309] Setting ErrFile to fd 2...
	I0511 23:31:02.701187  312348 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 23:31:02.701448  312348 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/bin
	I0511 23:31:02.701976  312348 out.go:303] Setting JSON to false
	I0511 23:31:02.703916  312348 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":4405,"bootTime":1652307458,"procs":615,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0511 23:31:02.704002  312348 start.go:125] virtualization: kvm guest
	I0511 23:31:02.706809  312348 out.go:177] * [calico-20220511231549-7294] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0511 23:31:02.708534  312348 out.go:177]   - MINIKUBE_LOCATION=13639
	I0511 23:31:02.708468  312348 notify.go:193] Checking for updates...
	I0511 23:31:02.711422  312348 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0511 23:31:02.713198  312348 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	I0511 23:31:02.714890  312348 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube
	I0511 23:31:02.716366  312348 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0511 23:31:02.718154  312348 config.go:178] Loaded profile config "auto-20220511231548-7294": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0511 23:31:02.718278  312348 config.go:178] Loaded profile config "bridge-20220511231548-7294": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0511 23:31:02.718362  312348 config.go:178] Loaded profile config "kubenet-20220511231548-7294": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0511 23:31:02.718412  312348 driver.go:358] Setting default libvirt URI to qemu:///system
	I0511 23:31:02.760238  312348 docker.go:137] docker version: linux-20.10.15
	I0511 23:31:02.760347  312348 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 23:31:02.873877  312348 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:49 SystemTime:2022-05-11 23:31:02.792393501 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clie
ntInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0511 23:31:02.873996  312348 docker.go:254] overlay module found
	I0511 23:31:02.876462  312348 out.go:177] * Using the docker driver based on user configuration
	I0511 23:31:02.878075  312348 start.go:284] selected driver: docker
	I0511 23:31:02.878093  312348 start.go:801] validating driver "docker" against <nil>
	I0511 23:31:02.878147  312348 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0511 23:31:02.879193  312348 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 23:31:02.989188  312348 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:49 SystemTime:2022-05-11 23:31:02.911284952 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clie
ntInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0511 23:31:02.989355  312348 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0511 23:31:02.989579  312348 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0511 23:31:02.992098  312348 out.go:177] * Using Docker driver with the root privilege
	I0511 23:31:02.993727  312348 cni.go:95] Creating CNI manager for "calico"
	I0511 23:31:02.993749  312348 start_flags.go:301] Found "Calico" CNI - setting NetworkPlugin=cni
	I0511 23:31:02.993775  312348 start_flags.go:306] config:
	{Name:calico-20220511231549-7294 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:calico-20220511231549-7294 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0511 23:31:02.995618  312348 out.go:177] * Starting control plane node calico-20220511231549-7294 in cluster calico-20220511231549-7294
	I0511 23:31:02.997045  312348 cache.go:120] Beginning downloading kic base image for docker with docker
	I0511 23:31:02.998474  312348 out.go:177] * Pulling base image ...
	I0511 23:31:03.000001  312348 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0511 23:31:03.000044  312348 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon
	I0511 23:31:03.000054  312348 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0511 23:31:03.000179  312348 cache.go:57] Caching tarball of preloaded images
	I0511 23:31:03.000515  312348 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0511 23:31:03.000548  312348 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0511 23:31:03.000686  312348 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511231549-7294/config.json ...
	I0511 23:31:03.000729  312348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511231549-7294/config.json: {Name:mk1f05ba9f2358fe7094192961d9ab1c910435db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 23:31:03.048296  312348 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon, skipping pull
	I0511 23:31:03.048325  312348 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a exists in daemon, skipping load
	I0511 23:31:03.048335  312348 cache.go:206] Successfully downloaded all kic artifacts
	I0511 23:31:03.048368  312348 start.go:352] acquiring machines lock for calico-20220511231549-7294: {Name:mk215cd04fa555f0ca354434f0473f815faa27cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0511 23:31:03.048503  312348 start.go:356] acquired machines lock for "calico-20220511231549-7294" in 111.232µs
	I0511 23:31:03.048528  312348 start.go:91] Provisioning new machine with config: &{Name:calico-20220511231549-7294 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:calico-20220511231549-7294 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiz
ations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0511 23:31:03.048616  312348 start.go:131] createHost starting for "" (driver="docker")
	I0511 23:31:03.051099  312348 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0511 23:31:03.051345  312348 start.go:165] libmachine.API.Create for "calico-20220511231549-7294" (driver="docker")
	I0511 23:31:03.051374  312348 client.go:168] LocalClient.Create starting
	I0511 23:31:03.051430  312348 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem
	I0511 23:31:03.051459  312348 main.go:134] libmachine: Decoding PEM data...
	I0511 23:31:03.051480  312348 main.go:134] libmachine: Parsing certificate...
	I0511 23:31:03.051559  312348 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/cert.pem
	I0511 23:31:03.051592  312348 main.go:134] libmachine: Decoding PEM data...
	I0511 23:31:03.051610  312348 main.go:134] libmachine: Parsing certificate...
	I0511 23:31:03.051939  312348 cli_runner.go:164] Run: docker network inspect calico-20220511231549-7294 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0511 23:31:03.086810  312348 cli_runner.go:211] docker network inspect calico-20220511231549-7294 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0511 23:31:03.086905  312348 network_create.go:272] running [docker network inspect calico-20220511231549-7294] to gather additional debugging logs...
	I0511 23:31:03.086941  312348 cli_runner.go:164] Run: docker network inspect calico-20220511231549-7294
	W0511 23:31:03.120712  312348 cli_runner.go:211] docker network inspect calico-20220511231549-7294 returned with exit code 1
	I0511 23:31:03.120746  312348 network_create.go:275] error running [docker network inspect calico-20220511231549-7294]: docker network inspect calico-20220511231549-7294: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220511231549-7294
	I0511 23:31:03.120761  312348 network_create.go:277] output of [docker network inspect calico-20220511231549-7294]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220511231549-7294
	
	** /stderr **
	I0511 23:31:03.120825  312348 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0511 23:31:03.156512  312348 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-90e0a6f3db2a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:e2:03:1b:18}}
	I0511 23:31:03.157202  312348 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-48613e5d8eae IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:c3:f2:c1:16}}
	I0511 23:31:03.158574  312348 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc0001173f8] misses:0}
	I0511 23:31:03.158635  312348 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0511 23:31:03.158675  312348 network_create.go:115] attempt to create docker network calico-20220511231549-7294 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0511 23:31:03.158999  312348 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220511231549-7294
	I0511 23:31:03.233607  312348 network_create.go:99] docker network calico-20220511231549-7294 192.168.67.0/24 created
	I0511 23:31:03.233660  312348 kic.go:106] calculated static IP "192.168.67.2" for the "calico-20220511231549-7294" container
	I0511 23:31:03.233731  312348 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0511 23:31:03.270544  312348 cli_runner.go:164] Run: docker volume create calico-20220511231549-7294 --label name.minikube.sigs.k8s.io=calico-20220511231549-7294 --label created_by.minikube.sigs.k8s.io=true
	I0511 23:31:03.305681  312348 oci.go:103] Successfully created a docker volume calico-20220511231549-7294
	I0511 23:31:03.305765  312348 cli_runner.go:164] Run: docker run --rm --name calico-20220511231549-7294-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220511231549-7294 --entrypoint /usr/bin/test -v calico-20220511231549-7294:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib
	I0511 23:31:03.895776  312348 oci.go:107] Successfully prepared a docker volume calico-20220511231549-7294
	I0511 23:31:03.895823  312348 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0511 23:31:03.895841  312348 kic.go:179] Starting extracting preloaded images to volume ...
	I0511 23:31:03.895906  312348 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220511231549-7294:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir
	I0511 23:31:10.667218  312348 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220511231549-7294:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir: (6.771244803s)
	I0511 23:31:10.667247  312348 kic.go:188] duration metric: took 6.771403 seconds to extract preloaded images to volume
	W0511 23:31:10.667397  312348 cgroups_linux.go:88] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0511 23:31:10.667488  312348 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0511 23:31:10.779801  312348 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220511231549-7294 --name calico-20220511231549-7294 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220511231549-7294 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220511231549-7294 --network calico-20220511231549-7294 --ip 192.168.67.2 --volume calico-20220511231549-7294:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a
	I0511 23:31:11.204149  312348 cli_runner.go:164] Run: docker container inspect calico-20220511231549-7294 --format={{.State.Running}}
	I0511 23:31:11.242576  312348 cli_runner.go:164] Run: docker container inspect calico-20220511231549-7294 --format={{.State.Status}}
	I0511 23:31:11.278783  312348 cli_runner.go:164] Run: docker exec calico-20220511231549-7294 stat /var/lib/dpkg/alternatives/iptables
	I0511 23:31:11.342455  312348 oci.go:247] the created container "calico-20220511231549-7294" has a running status.
	I0511 23:31:11.342491  312348 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/calico-20220511231549-7294/id_rsa...
	I0511 23:31:11.691010  312348 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/calico-20220511231549-7294/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0511 23:31:11.781962  312348 cli_runner.go:164] Run: docker container inspect calico-20220511231549-7294 --format={{.State.Status}}
	I0511 23:31:11.821007  312348 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0511 23:31:11.821032  312348 kic_runner.go:114] Args: [docker exec --privileged calico-20220511231549-7294 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0511 23:31:11.916368  312348 cli_runner.go:164] Run: docker container inspect calico-20220511231549-7294 --format={{.State.Status}}
	I0511 23:31:11.949924  312348 machine.go:88] provisioning docker machine ...
	I0511 23:31:11.949961  312348 ubuntu.go:169] provisioning hostname "calico-20220511231549-7294"
	I0511 23:31:11.950036  312348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220511231549-7294
	I0511 23:31:11.983924  312348 main.go:134] libmachine: Using SSH client type: native
	I0511 23:31:11.984410  312348 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49404 <nil> <nil>}
	I0511 23:31:11.984434  312348 main.go:134] libmachine: About to run SSH command:
	sudo hostname calico-20220511231549-7294 && echo "calico-20220511231549-7294" | sudo tee /etc/hostname
	I0511 23:31:12.108335  312348 main.go:134] libmachine: SSH cmd err, output: <nil>: calico-20220511231549-7294
	
	I0511 23:31:12.108426  312348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220511231549-7294
	I0511 23:31:12.141199  312348 main.go:134] libmachine: Using SSH client type: native
	I0511 23:31:12.141379  312348 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49404 <nil> <nil>}
	I0511 23:31:12.141410  312348 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20220511231549-7294' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20220511231549-7294/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20220511231549-7294' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0511 23:31:12.254375  312348 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0511 23:31:12.254411  312348 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube}
	I0511 23:31:12.254450  312348 ubuntu.go:177] setting up certificates
	I0511 23:31:12.254465  312348 provision.go:83] configureAuth start
	I0511 23:31:12.254533  312348 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220511231549-7294
	I0511 23:31:12.288039  312348 provision.go:138] copyHostCerts
	I0511 23:31:12.288093  312348 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.pem, removing ...
	I0511 23:31:12.288102  312348 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.pem
	I0511 23:31:12.288177  312348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.pem (1078 bytes)
	I0511 23:31:12.288273  312348 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cert.pem, removing ...
	I0511 23:31:12.288285  312348 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cert.pem
	I0511 23:31:12.288311  312348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cert.pem (1123 bytes)
	I0511 23:31:12.288371  312348 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/key.pem, removing ...
	I0511 23:31:12.288381  312348 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/key.pem
	I0511 23:31:12.288401  312348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/key.pem (1679 bytes)
	I0511 23:31:12.288440  312348 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca-key.pem org=jenkins.calico-20220511231549-7294 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20220511231549-7294]
	I0511 23:31:12.539124  312348 provision.go:172] copyRemoteCerts
	I0511 23:31:12.539181  312348 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0511 23:31:12.539212  312348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220511231549-7294
	I0511 23:31:12.572929  312348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49404 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/calico-20220511231549-7294/id_rsa Username:docker}
	I0511 23:31:12.658027  312348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0511 23:31:12.677154  312348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0511 23:31:12.695542  312348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0511 23:31:12.714162  312348 provision.go:86] duration metric: configureAuth took 459.679822ms
	I0511 23:31:12.714197  312348 ubuntu.go:193] setting minikube options for container-runtime
	I0511 23:31:12.714363  312348 config.go:178] Loaded profile config "calico-20220511231549-7294": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0511 23:31:12.714408  312348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220511231549-7294
	I0511 23:31:12.748358  312348 main.go:134] libmachine: Using SSH client type: native
	I0511 23:31:12.748519  312348 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49404 <nil> <nil>}
	I0511 23:31:12.748536  312348 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0511 23:31:12.858456  312348 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0511 23:31:12.858486  312348 ubuntu.go:71] root file system type: overlay
	I0511 23:31:12.858647  312348 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0511 23:31:12.858712  312348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220511231549-7294
	I0511 23:31:12.892298  312348 main.go:134] libmachine: Using SSH client type: native
	I0511 23:31:12.892477  312348 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49404 <nil> <nil>}
	I0511 23:31:12.892580  312348 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0511 23:31:13.012012  312348 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0511 23:31:13.012096  312348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220511231549-7294
	I0511 23:31:13.044639  312348 main.go:134] libmachine: Using SSH client type: native
	I0511 23:31:13.044835  312348 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49404 <nil> <nil>}
	I0511 23:31:13.044866  312348 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0511 23:31:13.698540  312348 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-05 13:17:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-11 23:31:13.006902465 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0511 23:31:13.698622  312348 machine.go:91] provisioned docker machine in 1.748675049s
	I0511 23:31:13.698643  312348 client.go:171] LocalClient.Create took 10.647263994s
	I0511 23:31:13.698666  312348 start.go:173] duration metric: libmachine.API.Create for "calico-20220511231549-7294" took 10.647321662s
	I0511 23:31:13.698697  312348 start.go:306] post-start starting for "calico-20220511231549-7294" (driver="docker")
	I0511 23:31:13.698717  312348 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0511 23:31:13.698781  312348 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0511 23:31:13.698857  312348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220511231549-7294
	I0511 23:31:13.731998  312348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49404 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/calico-20220511231549-7294/id_rsa Username:docker}
	I0511 23:31:13.813882  312348 ssh_runner.go:195] Run: cat /etc/os-release
	I0511 23:31:13.816657  312348 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0511 23:31:13.816684  312348 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0511 23:31:13.816693  312348 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0511 23:31:13.816701  312348 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0511 23:31:13.816717  312348 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/addons for local assets ...
	I0511 23:31:13.816764  312348 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files for local assets ...
	I0511 23:31:13.816836  312348 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/72942.pem -> 72942.pem in /etc/ssl/certs
	I0511 23:31:13.816907  312348 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0511 23:31:13.824340  312348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/72942.pem --> /etc/ssl/certs/72942.pem (1708 bytes)
	I0511 23:31:13.842278  312348 start.go:309] post-start completed in 143.549766ms
	I0511 23:31:13.842670  312348 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220511231549-7294
	I0511 23:31:13.875553  312348 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511231549-7294/config.json ...
	I0511 23:31:13.875811  312348 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0511 23:31:13.875848  312348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220511231549-7294
	I0511 23:31:13.909890  312348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49404 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/calico-20220511231549-7294/id_rsa Username:docker}
	I0511 23:31:13.990775  312348 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0511 23:31:13.995061  312348 start.go:134] duration metric: createHost completed in 10.946433357s
	I0511 23:31:13.995088  312348 start.go:81] releasing machines lock for "calico-20220511231549-7294", held for 10.946571929s
	I0511 23:31:13.995182  312348 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220511231549-7294
	I0511 23:31:14.028601  312348 ssh_runner.go:195] Run: systemctl --version
	I0511 23:31:14.028632  312348 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0511 23:31:14.028669  312348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220511231549-7294
	I0511 23:31:14.028685  312348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220511231549-7294
	I0511 23:31:14.063665  312348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49404 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/calico-20220511231549-7294/id_rsa Username:docker}
	I0511 23:31:14.064148  312348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49404 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/calico-20220511231549-7294/id_rsa Username:docker}
	I0511 23:31:14.172314  312348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0511 23:31:14.182444  312348 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0511 23:31:14.191741  312348 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0511 23:31:14.191793  312348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0511 23:31:14.201197  312348 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0511 23:31:14.216445  312348 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0511 23:31:14.293290  312348 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0511 23:31:14.378921  312348 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0511 23:31:14.389891  312348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0511 23:31:14.493780  312348 ssh_runner.go:195] Run: sudo systemctl start docker
	I0511 23:31:14.503625  312348 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0511 23:31:14.542819  312348 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0511 23:31:14.584693  312348 out.go:204] * Preparing Kubernetes v1.23.5 on Docker 20.10.15 ...
	I0511 23:31:14.584790  312348 cli_runner.go:164] Run: docker network inspect calico-20220511231549-7294 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0511 23:31:14.616964  312348 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0511 23:31:14.620391  312348 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0511 23:31:14.630155  312348 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0511 23:31:14.630223  312348 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0511 23:31:14.662198  312348 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0511 23:31:14.662222  312348 docker.go:541] Images already preloaded, skipping extraction
	I0511 23:31:14.662283  312348 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0511 23:31:14.695430  312348 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0511 23:31:14.695463  312348 cache_images.go:84] Images are preloaded, skipping loading
	I0511 23:31:14.695539  312348 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0511 23:31:14.781615  312348 cni.go:95] Creating CNI manager for "calico"
	I0511 23:31:14.781662  312348 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0511 23:31:14.781682  312348 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20220511231549-7294 NodeName:calico-20220511231549-7294 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0511 23:31:14.781841  312348 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "calico-20220511231549-7294"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0511 23:31:14.781921  312348 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=calico-20220511231549-7294 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:calico-20220511231549-7294 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I0511 23:31:14.781970  312348 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0511 23:31:14.789584  312348 binaries.go:44] Found k8s binaries, skipping transfer
	I0511 23:31:14.789644  312348 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0511 23:31:14.796925  312348 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0511 23:31:14.810496  312348 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0511 23:31:14.825117  312348 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
	I0511 23:31:14.839020  312348 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0511 23:31:14.842128  312348 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0511 23:31:14.852106  312348 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511231549-7294 for IP: 192.168.67.2
	I0511 23:31:14.852215  312348 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.key
	I0511 23:31:14.852252  312348 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/proxy-client-ca.key
	I0511 23:31:14.852296  312348 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511231549-7294/client.key
	I0511 23:31:14.852308  312348 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511231549-7294/client.crt with IP's: []
	I0511 23:31:15.091660  312348 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511231549-7294/client.crt ...
	I0511 23:31:15.091689  312348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511231549-7294/client.crt: {Name:mka03bf371cf116c94d6eab6021e85772a6ee72b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 23:31:15.091888  312348 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511231549-7294/client.key ...
	I0511 23:31:15.091903  312348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511231549-7294/client.key: {Name:mk5fa95d110971c90e1610f2664dfe5e1156f044 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 23:31:15.091990  312348 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511231549-7294/apiserver.key.c7fa3a9e
	I0511 23:31:15.092009  312348 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511231549-7294/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0511 23:31:15.252322  312348 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511231549-7294/apiserver.crt.c7fa3a9e ...
	I0511 23:31:15.252351  312348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511231549-7294/apiserver.crt.c7fa3a9e: {Name:mk71f1eb6b7ccb87f8d58a3e80f86fbb4eb8d13f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 23:31:15.252536  312348 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511231549-7294/apiserver.key.c7fa3a9e ...
	I0511 23:31:15.252551  312348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511231549-7294/apiserver.key.c7fa3a9e: {Name:mk588664f3049dd024a946b76a78bd0354ad2a2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 23:31:15.252633  312348 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511231549-7294/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511231549-7294/apiserver.crt
	I0511 23:31:15.252731  312348 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511231549-7294/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511231549-7294/apiserver.key
	I0511 23:31:15.252788  312348 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511231549-7294/proxy-client.key
	I0511 23:31:15.252802  312348 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511231549-7294/proxy-client.crt with IP's: []
	I0511 23:31:15.371782  312348 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511231549-7294/proxy-client.crt ...
	I0511 23:31:15.371815  312348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511231549-7294/proxy-client.crt: {Name:mk0a317630f6430ede066e4067e088839d719f64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 23:31:15.372014  312348 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511231549-7294/proxy-client.key ...
	I0511 23:31:15.372028  312348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511231549-7294/proxy-client.key: {Name:mk7f509d6099139bc579eaf49f94fd83ede40858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 23:31:15.372186  312348 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/7294.pem (1338 bytes)
	W0511 23:31:15.372230  312348 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/7294_empty.pem, impossibly tiny 0 bytes
	I0511 23:31:15.372244  312348 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca-key.pem (1679 bytes)
	I0511 23:31:15.372280  312348 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem (1078 bytes)
	I0511 23:31:15.372307  312348 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/cert.pem (1123 bytes)
	I0511 23:31:15.372331  312348 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/key.pem (1679 bytes)
	I0511 23:31:15.372373  312348 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/72942.pem (1708 bytes)
	I0511 23:31:15.372893  312348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511231549-7294/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0511 23:31:15.391766  312348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511231549-7294/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0511 23:31:15.409947  312348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511231549-7294/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0511 23:31:15.428012  312348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511231549-7294/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0511 23:31:15.446313  312348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0511 23:31:15.465186  312348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0511 23:31:15.484220  312348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0511 23:31:15.502671  312348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0511 23:31:15.521193  312348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/72942.pem --> /usr/share/ca-certificates/72942.pem (1708 bytes)
	I0511 23:31:15.539528  312348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0511 23:31:15.557970  312348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/7294.pem --> /usr/share/ca-certificates/7294.pem (1338 bytes)
	I0511 23:31:15.576870  312348 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0511 23:31:15.591012  312348 ssh_runner.go:195] Run: openssl version
	I0511 23:31:15.596084  312348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7294.pem && ln -fs /usr/share/ca-certificates/7294.pem /etc/ssl/certs/7294.pem"
	I0511 23:31:15.603647  312348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7294.pem
	I0511 23:31:15.606823  312348 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 11 22:56 /usr/share/ca-certificates/7294.pem
	I0511 23:31:15.606878  312348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7294.pem
	I0511 23:31:15.612221  312348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7294.pem /etc/ssl/certs/51391683.0"
	I0511 23:31:15.620293  312348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72942.pem && ln -fs /usr/share/ca-certificates/72942.pem /etc/ssl/certs/72942.pem"
	I0511 23:31:15.628006  312348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72942.pem
	I0511 23:31:15.631391  312348 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 11 22:56 /usr/share/ca-certificates/72942.pem
	I0511 23:31:15.631455  312348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72942.pem
	I0511 23:31:15.636568  312348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72942.pem /etc/ssl/certs/3ec20f2e.0"
	I0511 23:31:15.644526  312348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0511 23:31:15.652583  312348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0511 23:31:15.656452  312348 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 11 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0511 23:31:15.656515  312348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0511 23:31:15.661545  312348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0511 23:31:15.670102  312348 kubeadm.go:391] StartCluster: {Name:calico-20220511231549-7294 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:calico-20220511231549-7294 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0511 23:31:15.670248  312348 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0511 23:31:15.702283  312348 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0511 23:31:15.709829  312348 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0511 23:31:15.717560  312348 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0511 23:31:15.717611  312348 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0511 23:31:15.724743  312348 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0511 23:31:15.724795  312348 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0511 23:31:16.246177  312348 out.go:204]   - Generating certificates and keys ...
	I0511 23:31:17.831931  312348 out.go:204]   - Booting up control plane ...
	I0511 23:31:25.378585  312348 out.go:204]   - Configuring RBAC rules ...
	I0511 23:31:25.792829  312348 cni.go:95] Creating CNI manager for "calico"
	I0511 23:31:25.795072  312348 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0511 23:31:25.796725  312348 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0511 23:31:25.796754  312348 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202049 bytes)
	I0511 23:31:25.870863  312348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0511 23:31:27.375980  312348 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.505078059s)
	I0511 23:31:27.376092  312348 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0511 23:31:27.376195  312348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:31:27.376198  312348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=50a7977b568d2ad3e04003527a57f4502d6177a0 minikube.k8s.io/name=calico-20220511231549-7294 minikube.k8s.io/updated_at=2022_05_11T23_31_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:31:27.383780  312348 ops.go:34] apiserver oom_adj: -16
	I0511 23:31:27.473754  312348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:31:28.071197  312348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:31:28.570844  312348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:31:29.070884  312348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:31:29.571329  312348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:31:30.071384  312348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:31:30.571361  312348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:31:31.070737  312348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:31:31.571140  312348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:31:32.071380  312348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:31:32.571306  312348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:31:33.071337  312348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:31:33.570739  312348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:31:34.070812  312348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:31:34.570858  312348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:31:35.071177  312348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:31:35.571031  312348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:31:36.071302  312348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:31:36.570866  312348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:31:37.071334  312348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:31:37.570537  312348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:31:38.070702  312348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 23:31:38.135439  312348 kubeadm.go:1020] duration metric: took 10.759308949s to wait for elevateKubeSystemPrivileges.
	I0511 23:31:38.135467  312348 kubeadm.go:393] StartCluster complete in 22.465375708s
	I0511 23:31:38.135484  312348 settings.go:142] acquiring lock: {Name:mk1287875a6024bfdfd8882975fa4d7c31d85e31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 23:31:38.135572  312348 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	I0511 23:31:38.137020  312348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig: {Name:mka611e3c6ccae6ff6a6751a4f0fde8a6d2789a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 23:31:38.658339  312348 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20220511231549-7294" rescaled to 1
	I0511 23:31:38.658398  312348 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0511 23:31:38.660575  312348 out.go:177] * Verifying Kubernetes components...
	I0511 23:31:38.658478  312348 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0511 23:31:38.658490  312348 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0511 23:31:38.658695  312348 config.go:178] Loaded profile config "calico-20220511231549-7294": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0511 23:31:38.662221  312348 addons.go:65] Setting storage-provisioner=true in profile "calico-20220511231549-7294"
	I0511 23:31:38.662239  312348 addons.go:65] Setting default-storageclass=true in profile "calico-20220511231549-7294"
	I0511 23:31:38.662255  312348 addons.go:153] Setting addon storage-provisioner=true in "calico-20220511231549-7294"
	I0511 23:31:38.662261  312348 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20220511231549-7294"
	W0511 23:31:38.662269  312348 addons.go:165] addon storage-provisioner should already be in state true
	I0511 23:31:38.662315  312348 host.go:66] Checking if "calico-20220511231549-7294" exists ...
	I0511 23:31:38.662257  312348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0511 23:31:38.662604  312348 cli_runner.go:164] Run: docker container inspect calico-20220511231549-7294 --format={{.State.Status}}
	I0511 23:31:38.662836  312348 cli_runner.go:164] Run: docker container inspect calico-20220511231549-7294 --format={{.State.Status}}
	I0511 23:31:38.675414  312348 node_ready.go:35] waiting up to 5m0s for node "calico-20220511231549-7294" to be "Ready" ...
	I0511 23:31:38.679489  312348 node_ready.go:49] node "calico-20220511231549-7294" has status "Ready":"True"
	I0511 23:31:38.679535  312348 node_ready.go:38] duration metric: took 4.086508ms waiting for node "calico-20220511231549-7294" to be "Ready" ...
	I0511 23:31:38.679547  312348 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0511 23:31:38.690094  312348 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace to be "Ready" ...
	I0511 23:31:38.709515  312348 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0511 23:31:38.711060  312348 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0511 23:31:38.711087  312348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0511 23:31:38.711151  312348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220511231549-7294
	I0511 23:31:38.714412  312348 addons.go:153] Setting addon default-storageclass=true in "calico-20220511231549-7294"
	W0511 23:31:38.714441  312348 addons.go:165] addon default-storageclass should already be in state true
	I0511 23:31:38.714473  312348 host.go:66] Checking if "calico-20220511231549-7294" exists ...
	I0511 23:31:38.714965  312348 cli_runner.go:164] Run: docker container inspect calico-20220511231549-7294 --format={{.State.Status}}
	I0511 23:31:38.747566  312348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49404 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/calico-20220511231549-7294/id_rsa Username:docker}
	I0511 23:31:38.750558  312348 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0511 23:31:38.750587  312348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0511 23:31:38.750647  312348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220511231549-7294
	I0511 23:31:38.779623  312348 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0511 23:31:38.800457  312348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49404 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/calico-20220511231549-7294/id_rsa Username:docker}
	I0511 23:31:38.878824  312348 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0511 23:31:39.079826  312348 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0511 23:31:40.765058  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:31:41.264454  312348 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.484791276s)
	I0511 23:31:41.264491  312348 start.go:815] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I0511 23:31:41.572081  312348 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.693144021s)
	I0511 23:31:41.572147  312348 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.492291398s)
	I0511 23:31:41.573819  312348 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0511 23:31:41.575404  312348 addons.go:417] enableAddons completed in 2.916938331s
	I0511 23:31:43.202632  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:31:45.203715  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:31:47.204640  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:31:49.323379  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:31:51.703691  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:31:54.203176  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:31:56.203540  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:31:58.703390  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:32:00.703435  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:32:03.203756  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:32:05.702328  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:32:08.257190  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:32:10.703782  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:32:13.202857  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:32:15.702874  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:32:17.703425  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:32:19.703894  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:32:22.203371  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:32:24.704556  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:32:27.202469  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:32:29.203024  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:32:31.703087  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:32:34.202050  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:32:36.203311  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:32:38.703034  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:32:40.703432  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:32:43.203149  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:32:45.707319  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:32:48.206815  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:32:50.702983  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:32:52.703518  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:32:55.202377  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:32:57.203004  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:32:59.203464  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:33:01.204017  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:33:03.702879  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:33:05.703204  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:33:08.204187  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:33:10.704035  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:33:13.202993  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:33:15.703587  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:33:18.203317  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:33:20.703261  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:33:23.203437  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:33:25.703690  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:33:28.204043  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:33:30.204831  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:33:32.703003  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:33:34.703137  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:33:37.202590  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:33:39.203618  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:33:41.703433  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:33:44.202713  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:33:46.204017  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:33:48.703056  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:33:50.703185  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:33:53.202429  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:33:55.703140  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:33:58.203478  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:34:00.702620  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:34:02.703388  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:34:04.757066  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:34:07.202770  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:34:09.203253  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:34:11.703156  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:34:13.703657  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:34:15.703724  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:34:17.704080  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:34:20.203030  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:34:22.205486  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:34:24.702571  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:34:26.702730  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:34:28.759366  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:34:31.202386  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:34:33.203218  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:34:35.703031  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:34:37.704234  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:34:40.202774  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:34:42.203016  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:34:44.203129  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:34:46.203744  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:34:48.701895  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:34:51.203404  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:34:53.203598  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:34:55.703431  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:34:58.202519  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:35:00.203414  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:35:02.701731  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:35:04.711637  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:35:07.206785  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:35:09.703394  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:35:12.202717  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:35:14.205842  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:35:16.702917  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:35:18.704037  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:35:21.203562  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:35:23.204149  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:35:25.703097  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:35:27.703438  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:35:30.202691  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:35:32.203291  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:35:34.703850  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:35:36.704306  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:35:38.709556  312348 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace has status "Ready":"False"
	I0511 23:35:38.709585  312348 pod_ready.go:81] duration metric: took 4m0.019418218s waiting for pod "calico-kube-controllers-8594699699-h92cv" in "kube-system" namespace to be "Ready" ...
	E0511 23:35:38.709596  312348 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0511 23:35:38.709607  312348 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-ztv22" in "kube-system" namespace to be "Ready" ...
	I0511 23:35:40.760001  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:35:43.225247  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:35:45.725866  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:35:48.225562  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:35:50.725721  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:35:53.226512  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:35:55.725216  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:35:58.224350  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:36:00.226566  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:36:02.257053  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:36:04.725447  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:36:06.725872  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:36:09.260646  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:36:11.757243  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:36:14.225846  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:36:16.257087  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:36:18.763724  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:36:21.226714  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:36:23.726053  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:36:26.226475  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:36:28.761374  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:36:30.762632  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:36:33.225303  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:36:35.724281  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:36:37.724929  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:36:39.725000  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:36:42.225188  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:36:44.225704  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:36:46.225804  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:36:48.764068  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:36:51.224167  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:36:53.224717  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:36:55.226045  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:36:57.725376  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:37:00.259908  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:37:02.756276  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:37:04.761045  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:37:07.223947  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:37:09.257167  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:37:11.261404  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:37:13.724924  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:37:15.760481  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:37:18.224479  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:37:20.260040  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:37:22.724891  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:37:24.756194  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:37:27.224214  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:37:29.260170  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:37:31.724333  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:37:33.761339  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:37:36.224238  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:37:38.725126  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:37:40.761727  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:37:43.226210  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:37:45.258334  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:37:47.725467  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:37:50.256271  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:37:52.258078  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:37:54.724538  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:37:56.760494  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:37:58.761887  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:38:01.225096  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:38:03.225736  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:38:05.227089  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:38:07.262051  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:38:09.725298  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:38:11.726105  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:38:13.759932  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:38:16.226358  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:38:18.725514  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:38:20.725665  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:38:22.756323  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:38:25.225476  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:38:27.259983  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:38:29.724188  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:38:31.760565  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:38:34.225219  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:38:36.725292  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:38:39.256221  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:38:41.725924  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:38:44.225584  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:38:46.225734  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:38:48.762407  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:38:51.225843  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:38:53.757064  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:38:56.225368  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:38:58.257047  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:39:00.725282  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:39:03.261483  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:39:05.726328  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:39:07.756400  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:39:10.224398  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:39:12.224698  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:39:14.257935  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:39:16.725048  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:39:18.756246  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:39:20.756337  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:39:23.225179  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:39:25.225285  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:39:27.225686  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:39:29.260926  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:39:31.726747  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:39:33.762195  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:39:36.225493  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:39:38.725705  312348 pod_ready.go:102] pod "calico-node-ztv22" in "kube-system" namespace has status "Ready":"False"
	I0511 23:39:38.761074  312348 pod_ready.go:81] duration metric: took 4m0.051453447s waiting for pod "calico-node-ztv22" in "kube-system" namespace to be "Ready" ...
	E0511 23:39:38.761097  312348 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0511 23:39:38.761113  312348 pod_ready.go:38] duration metric: took 8m0.081555446s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0511 23:39:38.763089  312348 out.go:177] 
	W0511 23:39:38.764549  312348 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0511 23:39:38.764571  312348 out.go:239] * 
	W0511 23:39:38.765473  312348 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0511 23:39:38.766738  312348 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (516.17s)
E0511 23:43:58.027346    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511231548-7294/client.crt: no such file or directory
E0511 23:44:01.147268    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511231318-7294/client.crt: no such file or directory
E0511 23:44:12.538253    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511225237-7294/client.crt: no such file or directory
E0511 23:44:14.222324    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511231549-7294/client.crt: no such file or directory
E0511 23:44:25.712126    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511231548-7294/client.crt: no such file or directory
E0511 23:44:34.180887    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511231548-7294/client.crt: no such file or directory

Test pass (254/283)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 6.55
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.23.5/json-events 5.24
11 TestDownloadOnly/v1.23.5/preload-exists 0
15 TestDownloadOnly/v1.23.5/LogsDuration 0.08
17 TestDownloadOnly/v1.23.6-rc.0/json-events 4.44
18 TestDownloadOnly/v1.23.6-rc.0/preload-exists 0
22 TestDownloadOnly/v1.23.6-rc.0/LogsDuration 0.08
23 TestDownloadOnly/DeleteAll 0.33
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.21
25 TestDownloadOnlyKic 2.58
26 TestBinaryMirror 0.89
27 TestOffline 100.19
29 TestAddons/Setup 94.73
31 TestAddons/parallel/Registry 13.46
32 TestAddons/parallel/Ingress 28.28
33 TestAddons/parallel/MetricsServer 5.54
34 TestAddons/parallel/HelmTiller 8.67
36 TestAddons/parallel/CSI 40.23
38 TestAddons/serial/GCPAuth 37.9
39 TestAddons/StoppedEnableDisable 11.09
40 TestCertOptions 31.85
41 TestCertExpiration 219.07
42 TestDockerFlags 236.64
43 TestForceSystemdFlag 46.77
44 TestForceSystemdEnv 39.1
45 TestKVMDriverInstallOrUpdate 1.7
49 TestErrorSpam/setup 24.58
50 TestErrorSpam/start 1.02
51 TestErrorSpam/status 1.17
52 TestErrorSpam/pause 1.5
53 TestErrorSpam/unpause 1.57
54 TestErrorSpam/stop 10.99
57 TestFunctional/serial/CopySyncFile 0
58 TestFunctional/serial/StartWithProxy 41.66
59 TestFunctional/serial/AuditLog 0
60 TestFunctional/serial/SoftStart 5.43
61 TestFunctional/serial/KubeContext 0.04
62 TestFunctional/serial/KubectlGetPods 0.06
65 TestFunctional/serial/CacheCmd/cache/add_remote 2.46
66 TestFunctional/serial/CacheCmd/cache/add_local 0.84
67 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
68 TestFunctional/serial/CacheCmd/cache/list 0.07
69 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.54
70 TestFunctional/serial/CacheCmd/cache/cache_reload 1.85
71 TestFunctional/serial/CacheCmd/cache/delete 0.13
72 TestFunctional/serial/MinikubeKubectlCmd 0.24
73 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
74 TestFunctional/serial/ExtraConfig 23.84
76 TestFunctional/serial/LogsCmd 1.33
77 TestFunctional/serial/LogsFileCmd 1.37
79 TestFunctional/parallel/ConfigCmd 0.5
80 TestFunctional/parallel/DashboardCmd 18.4
81 TestFunctional/parallel/DryRun 1.65
82 TestFunctional/parallel/InternationalLanguage 0.36
83 TestFunctional/parallel/StatusCmd 1.61
86 TestFunctional/parallel/ServiceCmd 13.76
87 TestFunctional/parallel/ServiceCmdConnect 9.84
88 TestFunctional/parallel/AddonsCmd 0.22
89 TestFunctional/parallel/PersistentVolumeClaim 24.57
91 TestFunctional/parallel/SSHCmd 0.84
92 TestFunctional/parallel/CpCmd 1.84
93 TestFunctional/parallel/MySQL 26.73
94 TestFunctional/parallel/FileSync 0.52
95 TestFunctional/parallel/CertSync 2.64
99 TestFunctional/parallel/NodeLabels 0.06
101 TestFunctional/parallel/NonActiveRuntimeDisabled 0.45
103 TestFunctional/parallel/Version/short 0.08
104 TestFunctional/parallel/ProfileCmd/profile_not_create 0.6
105 TestFunctional/parallel/Version/components 1.81
106 TestFunctional/parallel/ImageCommands/ImageListShort 0.36
107 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
108 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
109 TestFunctional/parallel/ImageCommands/ImageListYaml 0.32
110 TestFunctional/parallel/ImageCommands/ImageBuild 2.76
111 TestFunctional/parallel/ImageCommands/Setup 1.04
112 TestFunctional/parallel/ProfileCmd/profile_list 0.54
113 TestFunctional/parallel/ProfileCmd/profile_json_output 0.61
114 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
115 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
116 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
117 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.38
118 TestFunctional/parallel/DockerEnv/bash 1.94
119 TestFunctional/parallel/MountCmd/any-port 13.51
120 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.4
121 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.5
122 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.3
123 TestFunctional/parallel/MountCmd/specific-port 1.91
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.27
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.95
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.86
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 3.55
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
132 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
136 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
137 TestFunctional/delete_addon-resizer_images 0.1
138 TestFunctional/delete_my-image_image 0.03
139 TestFunctional/delete_minikube_cached_images 0.03
142 TestIngressAddonLegacy/StartLegacyK8sCluster 51.3
144 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.17
145 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.38
146 TestIngressAddonLegacy/serial/ValidateIngressAddons 35.15
149 TestJSONOutput/start/Command 39.9
150 TestJSONOutput/start/Audit 0
152 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
153 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
155 TestJSONOutput/pause/Command 0.66
156 TestJSONOutput/pause/Audit 0
158 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
161 TestJSONOutput/unpause/Command 0.6
162 TestJSONOutput/unpause/Audit 0
164 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/stop/Command 10.86
168 TestJSONOutput/stop/Audit 0
170 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
172 TestErrorJSONOutput 0.31
174 TestKicCustomNetwork/create_custom_network 26.93
175 TestKicCustomNetwork/use_default_bridge_network 26.23
176 TestKicExistingNetwork 27.03
177 TestKicCustomSubnet 26.76
178 TestMainNoArgs 0.06
181 TestMountStart/serial/StartWithMountFirst 5.6
182 TestMountStart/serial/VerifyMountFirst 0.35
183 TestMountStart/serial/StartWithMountSecond 5.59
184 TestMountStart/serial/VerifyMountSecond 0.35
185 TestMountStart/serial/DeleteFirst 1.74
186 TestMountStart/serial/VerifyMountPostDelete 0.34
187 TestMountStart/serial/Stop 1.27
188 TestMountStart/serial/RestartStopped 6.58
189 TestMountStart/serial/VerifyMountPostStop 0.34
192 TestMultiNode/serial/FreshStart2Nodes 71.94
193 TestMultiNode/serial/DeployApp2Nodes 3.94
194 TestMultiNode/serial/PingHostFrom2Pods 0.86
195 TestMultiNode/serial/AddNode 26.39
196 TestMultiNode/serial/ProfileList 0.37
197 TestMultiNode/serial/CopyFile 12.25
198 TestMultiNode/serial/StopNode 2.51
199 TestMultiNode/serial/StartAfterStop 25.2
200 TestMultiNode/serial/RestartKeepsNodes 105.57
201 TestMultiNode/serial/DeleteNode 5.3
202 TestMultiNode/serial/StopMultiNode 21.78
203 TestMultiNode/serial/RestartMultiNode 59.86
204 TestMultiNode/serial/ValidateNameConflict 27.76
209 TestPreload 117.87
211 TestScheduledStopUnix 97.81
212 TestSkaffold 54.88
214 TestInsufficientStorage 13.45
215 TestRunningBinaryUpgrade 64.22
217 TestKubernetesUpgrade 88.91
218 TestMissingContainerUpgrade 108.14
220 TestStoppedBinaryUpgrade/Setup 0.41
221 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
222 TestNoKubernetes/serial/StartWithK8s 43.6
223 TestStoppedBinaryUpgrade/Upgrade 83.54
224 TestNoKubernetes/serial/StartWithStopK8s 19.71
225 TestNoKubernetes/serial/Start 6.03
226 TestNoKubernetes/serial/VerifyK8sNotRunning 0.41
227 TestNoKubernetes/serial/ProfileList 1.88
228 TestNoKubernetes/serial/Stop 1.32
229 TestNoKubernetes/serial/StartNoArgs 6.16
230 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.39
250 TestPause/serial/Start 50.85
251 TestStoppedBinaryUpgrade/MinikubeLogs 1.78
252 TestPause/serial/SecondStartNoReconfiguration 6.7
253 TestPause/serial/Pause 1.13
254 TestPause/serial/VerifyStatus 0.58
255 TestPause/serial/Unpause 0.7
256 TestPause/serial/PauseAgain 0.91
257 TestPause/serial/DeletePaused 2.5
258 TestPause/serial/VerifyDeletedResources 0.7
259 TestNetworkPlugins/group/auto/Start 494.04
260 TestNetworkPlugins/group/false/Start 46.42
261 TestNetworkPlugins/group/false/KubeletFlags 0.36
262 TestNetworkPlugins/group/false/NetCatPod 9.21
263 TestNetworkPlugins/group/false/DNS 0.14
264 TestNetworkPlugins/group/false/Localhost 0.12
265 TestNetworkPlugins/group/false/HairPin 5.14
266 TestNetworkPlugins/group/cilium/Start 84.87
268 TestNetworkPlugins/group/enable-default-cni/Start 290.1
269 TestNetworkPlugins/group/cilium/ControllerPod 5.02
270 TestNetworkPlugins/group/cilium/KubeletFlags 0.35
271 TestNetworkPlugins/group/cilium/NetCatPod 11.88
272 TestNetworkPlugins/group/cilium/DNS 0.2
273 TestNetworkPlugins/group/cilium/Localhost 0.19
274 TestNetworkPlugins/group/cilium/HairPin 0.14
275 TestNetworkPlugins/group/kindnet/Start 55.99
276 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
277 TestNetworkPlugins/group/kindnet/KubeletFlags 0.39
278 TestNetworkPlugins/group/kindnet/NetCatPod 9.23
280 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.4
281 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.29
283 TestNetworkPlugins/group/auto/KubeletFlags 0.4
284 TestNetworkPlugins/group/auto/NetCatPod 10.48
286 TestNetworkPlugins/group/bridge/Start 39.88
287 TestNetworkPlugins/group/kubenet/Start 39.97
288 TestNetworkPlugins/group/bridge/KubeletFlags 0.38
289 TestNetworkPlugins/group/bridge/NetCatPod 12.21
291 TestNetworkPlugins/group/kubenet/KubeletFlags 0.39
292 TestNetworkPlugins/group/kubenet/NetCatPod 11.19
296 TestStartStop/group/old-k8s-version/serial/FirstStart 313.45
298 TestStartStop/group/no-preload/serial/FirstStart 54.78
300 TestStartStop/group/embed-certs/serial/FirstStart 288.89
301 TestStartStop/group/no-preload/serial/DeployApp 9.25
302 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.72
303 TestStartStop/group/no-preload/serial/Stop 11.01
304 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.27
305 TestStartStop/group/no-preload/serial/SecondStart 338.18
306 TestStartStop/group/old-k8s-version/serial/DeployApp 9.34
307 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.62
308 TestStartStop/group/old-k8s-version/serial/Stop 11.02
309 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
310 TestStartStop/group/old-k8s-version/serial/SecondStart 605.25
312 TestStartStop/group/default-k8s-different-port/serial/FirstStart 290.8
313 TestStartStop/group/embed-certs/serial/DeployApp 7.3
314 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.61
315 TestStartStop/group/embed-certs/serial/Stop 10.91
316 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
317 TestStartStop/group/embed-certs/serial/SecondStart 572.02
318 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 14.01
319 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
320 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.39
321 TestStartStop/group/no-preload/serial/Pause 3.14
323 TestStartStop/group/newest-cni/serial/FirstStart 38.71
324 TestStartStop/group/newest-cni/serial/DeployApp 0
325 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.84
326 TestStartStop/group/newest-cni/serial/Stop 10.93
327 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
328 TestStartStop/group/newest-cni/serial/SecondStart 20.53
329 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
330 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
331 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.47
332 TestStartStop/group/newest-cni/serial/Pause 3.35
333 TestStartStop/group/default-k8s-different-port/serial/DeployApp 9.43
334 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.66
335 TestStartStop/group/default-k8s-different-port/serial/Stop 10.83
336 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.21
337 TestStartStop/group/default-k8s-different-port/serial/SecondStart 570.72
338 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
339 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
340 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.4
341 TestStartStop/group/old-k8s-version/serial/Pause 3.16
342 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
343 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
344 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.4
345 TestStartStop/group/embed-certs/serial/Pause 3.12
346 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 5.01
347 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.07
348 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.39
349 TestStartStop/group/default-k8s-different-port/serial/Pause 3.14

TestDownloadOnly/v1.16.0/json-events (6.55s)
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220511225217-7294 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220511225217-7294 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (6.547003206s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (6.55s)

TestDownloadOnly/v1.16.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220511225217-7294
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220511225217-7294: exit status 85 (81.006542ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/11 22:52:17
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.18.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0511 22:52:17.179301    7306 out.go:296] Setting OutFile to fd 1 ...
	I0511 22:52:17.179405    7306 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 22:52:17.179413    7306 out.go:309] Setting ErrFile to fd 2...
	I0511 22:52:17.179418    7306 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 22:52:17.179518    7306 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/bin
	W0511 22:52:17.179642    7306 root.go:300] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/config/config.json: no such file or directory
	I0511 22:52:17.179863    7306 out.go:303] Setting JSON to true
	I0511 22:52:17.180662    7306 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2079,"bootTime":1652307458,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0511 22:52:17.180739    7306 start.go:125] virtualization: kvm guest
	I0511 22:52:17.183299    7306 out.go:97] [download-only-20220511225217-7294] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0511 22:52:17.183412    7306 notify.go:193] Checking for updates...
	W0511 22:52:17.183432    7306 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball: no such file or directory
	I0511 22:52:17.185074    7306 out.go:169] MINIKUBE_LOCATION=13639
	I0511 22:52:17.186865    7306 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0511 22:52:17.188515    7306 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	I0511 22:52:17.189977    7306 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube
	I0511 22:52:17.191579    7306 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0511 22:52:17.195525    7306 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0511 22:52:17.195709    7306 driver.go:358] Setting default libvirt URI to qemu:///system
	I0511 22:52:17.229439    7306 docker.go:137] docker version: linux-20.10.15
	I0511 22:52:17.229519    7306 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 22:52:17.955373    7306 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:33 SystemTime:2022-05-11 22:52:17.256072946 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0511 22:52:17.955473    7306 docker.go:254] overlay module found
	I0511 22:52:17.957501    7306 out.go:97] Using the docker driver based on user configuration
	I0511 22:52:17.957524    7306 start.go:284] selected driver: docker
	I0511 22:52:17.957531    7306 start.go:801] validating driver "docker" against <nil>
	I0511 22:52:17.957708    7306 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 22:52:18.061084    7306 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:33 SystemTime:2022-05-11 22:52:17.983198974 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0511 22:52:18.061200    7306 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0511 22:52:18.061623    7306 start_flags.go:373] Using suggested 8000MB memory alloc based on sys=32103MB, container=32103MB
	I0511 22:52:18.061717    7306 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0511 22:52:18.063955    7306 out.go:169] Using Docker driver with the root privilege
	I0511 22:52:18.065375    7306 cni.go:95] Creating CNI manager for ""
	I0511 22:52:18.065392    7306 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0511 22:52:18.065399    7306 start_flags.go:306] config:
	{Name:download-only-20220511225217-7294 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220511225217-7294 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0511 22:52:18.066949    7306 out.go:97] Starting control plane node download-only-20220511225217-7294 in cluster download-only-20220511225217-7294
	I0511 22:52:18.066971    7306 cache.go:120] Beginning downloading kic base image for docker with docker
	I0511 22:52:18.068180    7306 out.go:97] Pulling base image ...
	I0511 22:52:18.068210    7306 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0511 22:52:18.068357    7306 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon
	I0511 22:52:18.102607    7306 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0511 22:52:18.102635    7306 cache.go:57] Caching tarball of preloaded images
	I0511 22:52:18.102889    7306 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0511 22:52:18.105270    7306 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0511 22:52:18.105296    7306 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0511 22:52:18.110175    7306 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon, skipping pull
	I0511 22:52:18.110194    7306 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a to local cache
	I0511 22:52:18.110338    7306 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local cache directory
	I0511 22:52:18.110425    7306 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a to local cache
	I0511 22:52:18.139698    7306 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0511 22:52:20.919610    7306 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0511 22:52:20.919681    7306 preload.go:256] verifying checksumm of /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0511 22:52:21.658636    7306 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0511 22:52:21.659032    7306 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/download-only-20220511225217-7294/config.json ...
	I0511 22:52:21.659065    7306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/download-only-20220511225217-7294/config.json: {Name:mk7f5e6a2ac709fa1b506e1061210d0bd74efc20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 22:52:21.659243    7306 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0511 22:52:21.659481    7306 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220511225217-7294"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

TestDownloadOnly/v1.23.5/json-events (5.24s)

=== RUN   TestDownloadOnly/v1.23.5/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220511225217-7294 --force --alsologtostderr --kubernetes-version=v1.23.5 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220511225217-7294 --force --alsologtostderr --kubernetes-version=v1.23.5 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.237031862s)
--- PASS: TestDownloadOnly/v1.23.5/json-events (5.24s)

TestDownloadOnly/v1.23.5/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.23.5/preload-exists
--- PASS: TestDownloadOnly/v1.23.5/preload-exists (0.00s)

TestDownloadOnly/v1.23.5/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.23.5/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220511225217-7294
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220511225217-7294: exit status 85 (78.966903ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/11 22:52:23
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.18.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0511 22:52:23.810891    7472 out.go:296] Setting OutFile to fd 1 ...
	I0511 22:52:23.811064    7472 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 22:52:23.811077    7472 out.go:309] Setting ErrFile to fd 2...
	I0511 22:52:23.811084    7472 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 22:52:23.811203    7472 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/bin
	W0511 22:52:23.811356    7472 root.go:300] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/config/config.json: no such file or directory
	I0511 22:52:23.811493    7472 out.go:303] Setting JSON to true
	I0511 22:52:23.812287    7472 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2086,"bootTime":1652307458,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0511 22:52:23.812346    7472 start.go:125] virtualization: kvm guest
	I0511 22:52:23.814761    7472 out.go:97] [download-only-20220511225217-7294] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0511 22:52:23.814879    7472 notify.go:193] Checking for updates...
	I0511 22:52:23.816494    7472 out.go:169] MINIKUBE_LOCATION=13639
	I0511 22:52:23.818095    7472 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0511 22:52:23.819566    7472 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	I0511 22:52:23.820979    7472 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube
	I0511 22:52:23.822500    7472 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0511 22:52:23.825478    7472 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0511 22:52:23.825917    7472 config.go:178] Loaded profile config "download-only-20220511225217-7294": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0511 22:52:23.825969    7472 start.go:709] api.Load failed for download-only-20220511225217-7294: filestore "download-only-20220511225217-7294": Docker machine "download-only-20220511225217-7294" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0511 22:52:23.826015    7472 driver.go:358] Setting default libvirt URI to qemu:///system
	W0511 22:52:23.826066    7472 start.go:709] api.Load failed for download-only-20220511225217-7294: filestore "download-only-20220511225217-7294": Docker machine "download-only-20220511225217-7294" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0511 22:52:23.860189    7472 docker.go:137] docker version: linux-20.10.15
	I0511 22:52:23.860275    7472 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 22:52:23.959043    7472 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:33 SystemTime:2022-05-11 22:52:23.88752209 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0511 22:52:23.959157    7472 docker.go:254] overlay module found
	I0511 22:52:23.961271    7472 out.go:97] Using the docker driver based on existing profile
	I0511 22:52:23.961290    7472 start.go:284] selected driver: docker
	I0511 22:52:23.961295    7472 start.go:801] validating driver "docker" against &{Name:download-only-20220511225217-7294 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220511225217-7294 Namesp
ace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0511 22:52:23.961523    7472 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 22:52:24.058814    7472 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:33 SystemTime:2022-05-11 22:52:23.987126382 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clie
ntInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0511 22:52:24.059323    7472 cni.go:95] Creating CNI manager for ""
	I0511 22:52:24.059338    7472 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0511 22:52:24.059346    7472 start_flags.go:306] config:
	{Name:download-only-20220511225217-7294 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:download-only-20220511225217-7294 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0511 22:52:24.061467    7472 out.go:97] Starting control plane node download-only-20220511225217-7294 in cluster download-only-20220511225217-7294
	I0511 22:52:24.061497    7472 cache.go:120] Beginning downloading kic base image for docker with docker
	I0511 22:52:24.063193    7472 out.go:97] Pulling base image ...
	I0511 22:52:24.063222    7472 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0511 22:52:24.063341    7472 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon
	I0511 22:52:24.096235    7472 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.5/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0511 22:52:24.096261    7472 cache.go:57] Caching tarball of preloaded images
	I0511 22:52:24.096544    7472 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0511 22:52:24.098520    7472 out.go:97] Downloading Kubernetes v1.23.5 preload ...
	I0511 22:52:24.098540    7472 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4 ...
	I0511 22:52:24.104989    7472 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon, skipping pull
	I0511 22:52:24.105007    7472 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a to local cache
	I0511 22:52:24.105134    7472 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local cache directory
	I0511 22:52:24.105156    7472 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local cache directory, skipping pull
	I0511 22:52:24.105164    7472 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a exists in cache, skipping pull
	I0511 22:52:24.105180    7472 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a as a tarball
	I0511 22:52:24.128107    7472 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.5/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4?checksum=md5:d0fb3d86acaea9a7773bdef3468eac56 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0511 22:52:27.713171    7472 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4 ...
	I0511 22:52:27.713261    7472 preload.go:256] verifying checksumm of /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4 ...
	I0511 22:52:28.514471    7472 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0511 22:52:28.514615    7472 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/download-only-20220511225217-7294/config.json ...
	I0511 22:52:28.514825    7472 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0511 22:52:28.515049    7472 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.23.5/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.5/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/linux/amd64/v1.23.5/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220511225217-7294"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.5/LogsDuration (0.08s)

TestDownloadOnly/v1.23.6-rc.0/json-events (4.44s)

=== RUN   TestDownloadOnly/v1.23.6-rc.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220511225217-7294 --force --alsologtostderr --kubernetes-version=v1.23.6-rc.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220511225217-7294 --force --alsologtostderr --kubernetes-version=v1.23.6-rc.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.435647962s)
--- PASS: TestDownloadOnly/v1.23.6-rc.0/json-events (4.44s)

TestDownloadOnly/v1.23.6-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.23.6-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.23.6-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.23.6-rc.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.23.6-rc.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220511225217-7294
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220511225217-7294: exit status 85 (79.824383ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/11 22:52:29
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.18.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0511 22:52:29.127029    7637 out.go:296] Setting OutFile to fd 1 ...
	I0511 22:52:29.127208    7637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 22:52:29.127218    7637 out.go:309] Setting ErrFile to fd 2...
	I0511 22:52:29.127223    7637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 22:52:29.127332    7637 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/bin
	W0511 22:52:29.127458    7637 root.go:300] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/config/config.json: no such file or directory
	I0511 22:52:29.127565    7637 out.go:303] Setting JSON to true
	I0511 22:52:29.128303    7637 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2091,"bootTime":1652307458,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0511 22:52:29.128371    7637 start.go:125] virtualization: kvm guest
	I0511 22:52:29.130970    7637 out.go:97] [download-only-20220511225217-7294] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0511 22:52:29.131128    7637 notify.go:193] Checking for updates...
	I0511 22:52:29.132915    7637 out.go:169] MINIKUBE_LOCATION=13639
	I0511 22:52:29.134469    7637 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0511 22:52:29.135911    7637 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	I0511 22:52:29.137362    7637 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube
	I0511 22:52:29.138854    7637 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220511225217-7294"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.6-rc.0/LogsDuration (0.08s)

TestDownloadOnly/DeleteAll (0.33s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.33s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20220511225217-7294
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnlyKic (2.58s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-20220511225234-7294 --force --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:228: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-20220511225234-7294 --force --alsologtostderr --driver=docker  --container-runtime=docker: (1.552057862s)
helpers_test.go:175: Cleaning up "download-docker-20220511225234-7294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-20220511225234-7294
--- PASS: TestDownloadOnlyKic (2.58s)

TestBinaryMirror (0.89s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-20220511225236-7294 --alsologtostderr --binary-mirror http://127.0.0.1:42591 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-20220511225236-7294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-20220511225236-7294
--- PASS: TestBinaryMirror (0.89s)

TestOffline (100.19s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-20220511231427-7294 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-20220511231427-7294 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m36.884678935s)
helpers_test.go:175: Cleaning up "offline-docker-20220511231427-7294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-20220511231427-7294
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-20220511231427-7294: (3.307714878s)
--- PASS: TestOffline (100.19s)

                                                
                                    
TestAddons/Setup (94.73s)

=== RUN   TestAddons/Setup
addons_test.go:75: (dbg) Run:  out/minikube-linux-amd64 start -p addons-20220511225237-7294 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:75: (dbg) Done: out/minikube-linux-amd64 start -p addons-20220511225237-7294 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (1m34.727667792s)
--- PASS: TestAddons/Setup (94.73s)

TestAddons/parallel/Registry (13.46s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:280: registry stabilized in 10.129065ms
addons_test.go:282: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-b9cbd" [1fcc23c0-3d5b-4d3b-85f0-845bee7cc2ed] Running
addons_test.go:282: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.012660531s
addons_test.go:285: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-xbqbx" [abc48ce1-5229-4173-bda0-25231245f987] Running
addons_test.go:285: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.012232761s
addons_test.go:290: (dbg) Run:  kubectl --context addons-20220511225237-7294 delete po -l run=registry-test --now
addons_test.go:295: (dbg) Run:  kubectl --context addons-20220511225237-7294 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:295: (dbg) Done: kubectl --context addons-20220511225237-7294 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.349989414s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220511225237-7294 ip
addons_test.go:338: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220511225237-7294 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.46s)

TestAddons/parallel/Ingress (28.28s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:162: (dbg) Run:  kubectl --context addons-20220511225237-7294 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:162: (dbg) Done: kubectl --context addons-20220511225237-7294 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (6.846994972s)
addons_test.go:182: (dbg) Run:  kubectl --context addons-20220511225237-7294 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:182: (dbg) Non-zero exit: kubectl --context addons-20220511225237-7294 replace --force -f testdata/nginx-ingress-v1.yaml: exit status 1 (181.597858ms)
** stderr **
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": dial tcp 10.97.223.119:443: connect: connection refused
** /stderr **
addons_test.go:182: (dbg) Run:  kubectl --context addons-20220511225237-7294 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:195: (dbg) Run:  kubectl --context addons-20220511225237-7294 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:200: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [e0fc396f-0e24-407d-935b-4b6cd641f3ab] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [e0fc396f-0e24-407d-935b-4b6cd641f3ab] Running
2022/05/11 22:54:25 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:200: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.008490749s
addons_test.go:212: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220511225237-7294 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:236: (dbg) Run:  kubectl --context addons-20220511225237-7294 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220511225237-7294 ip
addons_test.go:247: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220511225237-7294 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-20220511225237-7294 addons disable ingress-dns --alsologtostderr -v=1: (1.350224971s)
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220511225237-7294 addons disable ingress --alsologtostderr -v=1
addons_test.go:261: (dbg) Done: out/minikube-linux-amd64 -p addons-20220511225237-7294 addons disable ingress --alsologtostderr -v=1: (7.519619603s)
--- PASS: TestAddons/parallel/Ingress (28.28s)

TestAddons/parallel/MetricsServer (5.54s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:357: metrics-server stabilized in 8.683172ms
addons_test.go:359: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-bd6f4dd56-k4gpt" [9e357009-e116-4e29-936f-62959f62a84d] Running
addons_test.go:359: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.012791222s
addons_test.go:365: (dbg) Run:  kubectl --context addons-20220511225237-7294 top pods -n kube-system
addons_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220511225237-7294 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.54s)

TestAddons/parallel/HelmTiller (8.67s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:406: tiller-deploy stabilized in 7.187843ms
addons_test.go:408: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-6d67d5465d-qzzbw" [b5000028-e559-49fc-96c5-17478612f114] Running
addons_test.go:408: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.013097562s
addons_test.go:423: (dbg) Run:  kubectl --context addons-20220511225237-7294 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:423: (dbg) Done: kubectl --context addons-20220511225237-7294 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.316293811s)
addons_test.go:440: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220511225237-7294 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (8.67s)

TestAddons/parallel/CSI (40.23s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:511: csi-hostpath-driver pods stabilized in 6.245927ms
addons_test.go:514: (dbg) Run:  kubectl --context addons-20220511225237-7294 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:519: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220511225237-7294 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:524: (dbg) Run:  kubectl --context addons-20220511225237-7294 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:529: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [61ccbb06-fbf8-43ef-a656-e3ef82fa0ec2] Pending
helpers_test.go:342: "task-pv-pod" [61ccbb06-fbf8-43ef-a656-e3ef82fa0ec2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod" [61ccbb06-fbf8-43ef-a656-e3ef82fa0ec2] Running
addons_test.go:529: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.016574513s
addons_test.go:534: (dbg) Run:  kubectl --context addons-20220511225237-7294 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:539: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220511225237-7294 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220511225237-7294 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:544: (dbg) Run:  kubectl --context addons-20220511225237-7294 delete pod task-pv-pod
addons_test.go:550: (dbg) Run:  kubectl --context addons-20220511225237-7294 delete pvc hpvc
addons_test.go:556: (dbg) Run:  kubectl --context addons-20220511225237-7294 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:561: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220511225237-7294 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:566: (dbg) Run:  kubectl --context addons-20220511225237-7294 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [414e4c90-360a-4edd-8a5b-7bc263c09c3b] Pending
helpers_test.go:342: "task-pv-pod-restore" [414e4c90-360a-4edd-8a5b-7bc263c09c3b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod-restore" [414e4c90-360a-4edd-8a5b-7bc263c09c3b] Running
addons_test.go:571: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 13.006487626s
addons_test.go:576: (dbg) Run:  kubectl --context addons-20220511225237-7294 delete pod task-pv-pod-restore
addons_test.go:576: (dbg) Done: kubectl --context addons-20220511225237-7294 delete pod task-pv-pod-restore: (1.000278773s)
addons_test.go:580: (dbg) Run:  kubectl --context addons-20220511225237-7294 delete pvc hpvc-restore
addons_test.go:584: (dbg) Run:  kubectl --context addons-20220511225237-7294 delete volumesnapshot new-snapshot-demo
addons_test.go:588: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220511225237-7294 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:588: (dbg) Done: out/minikube-linux-amd64 -p addons-20220511225237-7294 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.891740138s)
addons_test.go:592: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220511225237-7294 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (40.23s)

TestAddons/serial/GCPAuth (37.9s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:603: (dbg) Run:  kubectl --context addons-20220511225237-7294 create -f testdata/busybox.yaml
addons_test.go:609: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [995c5f83-bb30-4265-810b-344b6d47b93a] Pending
helpers_test.go:342: "busybox" [995c5f83-bb30-4265-810b-344b6d47b93a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [995c5f83-bb30-4265-810b-344b6d47b93a] Running
addons_test.go:609: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 7.008328941s
addons_test.go:615: (dbg) Run:  kubectl --context addons-20220511225237-7294 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:652: (dbg) Run:  kubectl --context addons-20220511225237-7294 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:665: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220511225237-7294 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:665: (dbg) Done: out/minikube-linux-amd64 -p addons-20220511225237-7294 addons disable gcp-auth --alsologtostderr -v=1: (5.807280609s)
addons_test.go:681: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220511225237-7294 addons enable gcp-auth
addons_test.go:687: (dbg) Run:  kubectl --context addons-20220511225237-7294 apply -f testdata/private-image.yaml
addons_test.go:694: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:342: "private-image-7f8587d5b7-nrkjt" [bced448c-84be-4699-bff5-baefc7dd45ff] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])
helpers_test.go:342: "private-image-7f8587d5b7-nrkjt" [bced448c-84be-4699-bff5-baefc7dd45ff] Running
addons_test.go:694: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image healthy within 13.005787115s
addons_test.go:700: (dbg) Run:  kubectl --context addons-20220511225237-7294 apply -f testdata/private-image-eu.yaml
addons_test.go:705: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:342: "private-image-eu-869dcfd8c7-qvzv7" [522dec99-d7c6-48f7-b48c-835926e617d0] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])
helpers_test.go:342: "private-image-eu-869dcfd8c7-qvzv7" [522dec99-d7c6-48f7-b48c-835926e617d0] Running
addons_test.go:705: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image-eu healthy within 10.006158904s
--- PASS: TestAddons/serial/GCPAuth (37.90s)

TestAddons/StoppedEnableDisable (11.09s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:132: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-20220511225237-7294
addons_test.go:132: (dbg) Done: out/minikube-linux-amd64 stop -p addons-20220511225237-7294: (10.885656737s)
addons_test.go:136: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-20220511225237-7294
addons_test.go:140: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-20220511225237-7294
--- PASS: TestAddons/StoppedEnableDisable (11.09s)

TestCertOptions (31.85s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20220511231607-7294 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20220511231607-7294 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (27.867880679s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20220511231607-7294 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-20220511231607-7294 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-20220511231607-7294 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-20220511231607-7294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20220511231607-7294
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20220511231607-7294: (3.079757483s)
--- PASS: TestCertOptions (31.85s)

TestCertExpiration (219.07s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220511231618-7294 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220511231618-7294 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (32.0568728s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220511231618-7294 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
E0511 23:19:51.975640    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511225849-7294/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220511231618-7294 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (4.58549906s)
helpers_test.go:175: Cleaning up "cert-expiration-20220511231618-7294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-20220511231618-7294
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-20220511231618-7294: (2.431365096s)
--- PASS: TestCertExpiration (219.07s)

TestDockerFlags (236.64s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-20220511231637-7294 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:45: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-20220511231637-7294 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (3m53.275106631s)
docker_test.go:50: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-20220511231637-7294 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-20220511231637-7294 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-20220511231637-7294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-20220511231637-7294
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-20220511231637-7294: (2.534429837s)
--- PASS: TestDockerFlags (236.64s)

TestForceSystemdFlag (46.77s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20220511231427-7294 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20220511231427-7294 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (43.659419334s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-20220511231427-7294 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-20220511231427-7294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20220511231427-7294
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20220511231427-7294: (2.5612396s)
--- PASS: TestForceSystemdFlag (46.77s)

TestForceSystemdEnv (39.1s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20220511231558-7294 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:150: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20220511231558-7294 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (35.651938653s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-20220511231558-7294 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-20220511231558-7294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20220511231558-7294
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20220511231558-7294: (2.906568778s)
--- PASS: TestForceSystemdEnv (39.10s)

x
+
TestKVMDriverInstallOrUpdate (1.7s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.70s)

TestErrorSpam/setup (24.58s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20220511225550-7294 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220511225550-7294 --driver=docker  --container-runtime=docker
error_spam_test.go:78: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20220511225550-7294 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220511225550-7294 --driver=docker  --container-runtime=docker: (24.582986026s)
--- PASS: TestErrorSpam/setup (24.58s)

TestErrorSpam/start (1.02s)

=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220511225550-7294 --log_dir /tmp/nospam-20220511225550-7294 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220511225550-7294 --log_dir /tmp/nospam-20220511225550-7294 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220511225550-7294 --log_dir /tmp/nospam-20220511225550-7294 start --dry-run
--- PASS: TestErrorSpam/start (1.02s)

TestErrorSpam/status (1.17s)

=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220511225550-7294 --log_dir /tmp/nospam-20220511225550-7294 status
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220511225550-7294 --log_dir /tmp/nospam-20220511225550-7294 status
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220511225550-7294 --log_dir /tmp/nospam-20220511225550-7294 status
--- PASS: TestErrorSpam/status (1.17s)

TestErrorSpam/pause (1.5s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220511225550-7294 --log_dir /tmp/nospam-20220511225550-7294 pause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220511225550-7294 --log_dir /tmp/nospam-20220511225550-7294 pause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220511225550-7294 --log_dir /tmp/nospam-20220511225550-7294 pause
--- PASS: TestErrorSpam/pause (1.50s)

TestErrorSpam/unpause (1.57s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220511225550-7294 --log_dir /tmp/nospam-20220511225550-7294 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220511225550-7294 --log_dir /tmp/nospam-20220511225550-7294 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220511225550-7294 --log_dir /tmp/nospam-20220511225550-7294 unpause
--- PASS: TestErrorSpam/unpause (1.57s)

TestErrorSpam/stop (10.99s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220511225550-7294 --log_dir /tmp/nospam-20220511225550-7294 stop
error_spam_test.go:156: (dbg) Done: out/minikube-linux-amd64 -p nospam-20220511225550-7294 --log_dir /tmp/nospam-20220511225550-7294 stop: (10.712224488s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220511225550-7294 --log_dir /tmp/nospam-20220511225550-7294 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220511225550-7294 --log_dir /tmp/nospam-20220511225550-7294 stop
--- PASS: TestErrorSpam/stop (10.99s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1784: local sync path: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/test/nested/copy/7294/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (41.66s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2163: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220511225632-7294 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2163: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220511225632-7294 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (41.66360574s)
--- PASS: TestFunctional/serial/StartWithProxy (41.66s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.43s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:654: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220511225632-7294 --alsologtostderr -v=8
functional_test.go:654: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220511225632-7294 --alsologtostderr -v=8: (5.425773603s)
functional_test.go:658: soft start took 5.426449407s for "functional-20220511225632-7294" cluster.
--- PASS: TestFunctional/serial/SoftStart (5.43s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:676: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:691: (dbg) Run:  kubectl --context functional-20220511225632-7294 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.46s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 cache add k8s.gcr.io/pause:3.1
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 cache add k8s.gcr.io/pause:3.3
functional_test.go:1044: (dbg) Done: out/minikube-linux-amd64 -p functional-20220511225632-7294 cache add k8s.gcr.io/pause:3.3: (1.043426814s)
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 cache add k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.46s)

TestFunctional/serial/CacheCmd/cache/add_local (0.84s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1072: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220511225632-7294 /tmp/TestFunctionalserialCacheCmdcacheadd_local2387457036/001
functional_test.go:1084: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 cache add minikube-local-cache-test:functional-20220511225632-7294
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 cache delete minikube-local-cache-test:functional-20220511225632-7294
functional_test.go:1078: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220511225632-7294
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.84s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1097: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1105: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.54s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.54s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.85s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1142: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1148: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1148: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220511225632-7294 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (370.000229ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 cache reload
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.85s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.24s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:711: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 kubectl -- --context functional-20220511225632-7294 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.24s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:736: (dbg) Run:  out/kubectl --context functional-20220511225632-7294 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (23.84s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:752: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220511225632-7294 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:752: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220511225632-7294 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (23.843971315s)
functional_test.go:756: restart took 23.844080358s for "functional-20220511225632-7294" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (23.84s)

TestFunctional/serial/LogsCmd (1.33s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1231: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 logs
functional_test.go:1231: (dbg) Done: out/minikube-linux-amd64 -p functional-20220511225632-7294 logs: (1.333771684s)
--- PASS: TestFunctional/serial/LogsCmd (1.33s)

TestFunctional/serial/LogsFileCmd (1.37s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1245: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 logs --file /tmp/TestFunctionalserialLogsFileCmd3104074089/001/logs.txt
functional_test.go:1245: (dbg) Done: out/minikube-linux-amd64 -p functional-20220511225632-7294 logs --file /tmp/TestFunctionalserialLogsFileCmd3104074089/001/logs.txt: (1.369833077s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.37s)

TestFunctional/parallel/ConfigCmd (0.5s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220511225632-7294 config get cpus: exit status 14 (82.867227ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 config set cpus 2
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 config get cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220511225632-7294 config get cpus: exit status 14 (77.577796ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)

TestFunctional/parallel/DashboardCmd (18.4s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:900: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220511225632-7294 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220511225632-7294 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 43700: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (18.40s)

TestFunctional/parallel/DryRun (1.65s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:969: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220511225632-7294 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:969: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220511225632-7294 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (1.180088517s)

-- stdout --
	* [functional-20220511225632-7294] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13639
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0511 22:58:02.254853   43096 out.go:296] Setting OutFile to fd 1 ...
	I0511 22:58:02.255091   43096 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 22:58:02.255103   43096 out.go:309] Setting ErrFile to fd 2...
	I0511 22:58:02.255108   43096 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 22:58:02.255257   43096 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/bin
	I0511 22:58:02.255605   43096 out.go:303] Setting JSON to false
	I0511 22:58:02.257325   43096 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2424,"bootTime":1652307458,"procs":538,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0511 22:58:02.257420   43096 start.go:125] virtualization: kvm guest
	I0511 22:58:02.410057   43096 out.go:177] * [functional-20220511225632-7294] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0511 22:58:02.537320   43096 out.go:177]   - MINIKUBE_LOCATION=13639
	I0511 22:58:02.617378   43096 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0511 22:58:02.695103   43096 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	I0511 22:58:02.708276   43096 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube
	I0511 22:58:02.721144   43096 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0511 22:58:02.734240   43096 config.go:178] Loaded profile config "functional-20220511225632-7294": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0511 22:58:02.734674   43096 driver.go:358] Setting default libvirt URI to qemu:///system
	I0511 22:58:02.773172   43096 docker.go:137] docker version: linux-20.10.15
	I0511 22:58:02.773303   43096 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 22:58:02.883004   43096 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:64 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:42 SystemTime:2022-05-11 22:58:02.804479407 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clie
ntInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0511 22:58:02.883108   43096 docker.go:254] overlay module found
	I0511 22:58:03.046596   43096 out.go:177] * Using the docker driver based on existing profile
	I0511 22:58:03.110415   43096 start.go:284] selected driver: docker
	I0511 22:58:03.110469   43096 start.go:801] validating driver "docker" against &{Name:functional-20220511225632-7294 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220511225632-7294 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registr
y-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0511 22:58:03.110650   43096 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0511 22:58:03.179881   43096 out.go:177] 
	W0511 22:58:03.217219   43096 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0511 22:58:03.226271   43096 out.go:177] 

** /stderr **
functional_test.go:986: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220511225632-7294 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (1.65s)

TestFunctional/parallel/InternationalLanguage (0.36s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1015: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220511225632-7294 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1015: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220511225632-7294 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (356.946438ms)
-- stdout --
	* [functional-20220511225632-7294] minikube v1.25.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13639
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0511 22:57:58.362309   41452 out.go:296] Setting OutFile to fd 1 ...
	I0511 22:57:58.362501   41452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 22:57:58.362516   41452 out.go:309] Setting ErrFile to fd 2...
	I0511 22:57:58.362524   41452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 22:57:58.362754   41452 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/bin
	I0511 22:57:58.363076   41452 out.go:303] Setting JSON to false
	I0511 22:57:58.364839   41452 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2420,"bootTime":1652307458,"procs":536,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0511 22:57:58.364921   41452 start.go:125] virtualization: kvm guest
	I0511 22:57:58.368118   41452 out.go:177] * [functional-20220511225632-7294] minikube v1.25.2 sur Ubuntu 20.04 (kvm/amd64)
	I0511 22:57:58.370241   41452 out.go:177]   - MINIKUBE_LOCATION=13639
	I0511 22:57:58.371755   41452 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0511 22:57:58.374230   41452 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	I0511 22:57:58.375811   41452 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube
	I0511 22:57:58.378244   41452 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0511 22:57:58.380226   41452 config.go:178] Loaded profile config "functional-20220511225632-7294": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0511 22:57:58.380814   41452 driver.go:358] Setting default libvirt URI to qemu:///system
	I0511 22:57:58.442420   41452 docker.go:137] docker version: linux-20.10.15
	I0511 22:57:58.442568   41452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 22:57:58.614009   41452 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:64 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:41 SystemTime:2022-05-11 22:57:58.49033249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0511 22:57:58.614160   41452 docker.go:254] overlay module found
	I0511 22:57:58.616996   41452 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0511 22:57:58.618464   41452 start.go:284] selected driver: docker
	I0511 22:57:58.618494   41452 start.go:801] validating driver "docker" against &{Name:functional-20220511225632-7294 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220511225632-7294 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registr
y-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0511 22:57:58.618675   41452 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0511 22:57:58.621545   41452 out.go:177] 
	W0511 22:57:58.622941   41452 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0511 22:57:58.624143   41452 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.36s)
TestFunctional/parallel/StatusCmd (1.61s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:849: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 status
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:855: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:867: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.61s)
TestFunctional/parallel/ServiceCmd (13.76s)
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1435: (dbg) Run:  kubectl --context functional-20220511225632-7294 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-20220511225632-7294 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-54fbb85-fhgk5" [a53018e4-2423-4911-a68c-cc2c7ed2b73c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-54fbb85-fhgk5" [a53018e4-2423-4911-a68c-cc2c7ed2b73c] Running
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 10.008028618s
functional_test.go:1451: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 service list
functional_test.go:1451: (dbg) Done: out/minikube-linux-amd64 -p functional-20220511225632-7294 service list: (1.771588458s)
functional_test.go:1465: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 service --namespace=default --https --url hello-node
functional_test.go:1478: found endpoint: https://192.168.49.2:32678
functional_test.go:1493: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 service hello-node --url --format={{.IP}}
functional_test.go:1507: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 service hello-node --url
functional_test.go:1513: found endpoint for hello-node: http://192.168.49.2:32678
--- PASS: TestFunctional/parallel/ServiceCmd (13.76s)
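The endpoints discovered above (`https://192.168.49.2:32678` and its http variant) are simply the node IP plus the service's NodePort. A trivial sketch of assembling one by hand, with the values copied from the log:

```shell
# A NodePort service endpoint is <scheme>://<node-ip>:<node-port>.
# The IP and port below are the values "service --url" reported above.
node_ip=192.168.49.2
node_port=32678
echo "http://${node_ip}:${node_port}"
```

On a live cluster the same pieces could be read with `kubectl get nodes -o wide` and `kubectl get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}'`; the concrete values depend on the cluster.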
TestFunctional/parallel/ServiceCmdConnect (9.84s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1561: (dbg) Run:  kubectl --context functional-20220511225632-7294 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1567: (dbg) Run:  kubectl --context functional-20220511225632-7294 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1572: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-74cf8bc446-kpjlf" [d7f46958-9c6b-4a1f-ad19-943ef15c2202] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-74cf8bc446-kpjlf" [d7f46958-9c6b-4a1f-ad19-943ef15c2202] Running
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1572: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.005849366s
functional_test.go:1581: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 service hello-node-connect --url
functional_test.go:1587: found endpoint for hello-node-connect: http://192.168.49.2:31771
functional_test.go:1607: http://192.168.49.2:31771: success! body:
Hostname: hello-node-connect-74cf8bc446-kpjlf
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=172.17.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31771
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.84s)
TestFunctional/parallel/AddonsCmd (0.22s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1622: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 addons list
functional_test.go:1634: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)
TestFunctional/parallel/PersistentVolumeClaim (24.57s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [13d6b36a-da63-427d-9a0c-67cc25dc9131] Running
2022/05/11 22:58:22 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007305987s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20220511225632-7294 get storageclass -o=json
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20220511225632-7294 apply -f testdata/storage-provisioner/pvc.yaml
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220511225632-7294 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220511225632-7294 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [ba5487e7-5c21-4785-861a-e6b4dfabacde] Pending
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [ba5487e7-5c21-4785-861a-e6b4dfabacde] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [ba5487e7-5c21-4785-861a-e6b4dfabacde] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.006884591s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20220511225632-7294 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20220511225632-7294 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220511225632-7294 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [8c4e51c0-3d6d-446c-b0d9-5e05fde586a4] Pending
helpers_test.go:342: "sp-pod" [8c4e51c0-3d6d-446c-b0d9-5e05fde586a4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.006195527s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20220511225632-7294 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.57s)
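The pass above boils down to: claim a volume, write a file from one pod, delete that pod, and confirm a replacement pod still sees the file. A pure-shell sketch of that persistence check, using a temp directory to stand in for the `myclaim`-backed mount (the stand-in and comments are illustrative):

```shell
# Stand-in for the PVC persistence check above: the directory plays the
# role of the provisioned volume, which must outlive the pod that wrote it.
vol=$(mktemp -d)      # "myclaim" PVC, provisioned by the storage class

touch "$vol/foo"      # first sp-pod: touch /tmp/mount/foo
# ...first pod deleted, second pod created with the same claim...
ls "$vol"             # second sp-pod: ls /tmp/mount shows foo survived
```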
TestFunctional/parallel/SSHCmd (0.84s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1657: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 ssh "echo hello"
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1674: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.84s)
TestFunctional/parallel/CpCmd (1.84s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 cp testdata/cp-test.txt /home/docker/cp-test.txt
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 ssh -n functional-20220511225632-7294 "sudo cat /home/docker/cp-test.txt"
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 cp functional-20220511225632-7294:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd446137663/001/cp-test.txt
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 ssh -n functional-20220511225632-7294 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.84s)
TestFunctional/parallel/MySQL (26.73s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1722: (dbg) Run:  kubectl --context functional-20220511225632-7294 replace --force -f testdata/mysql.yaml
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1728: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:342: "mysql-b87c45988-j7sg7" [75239971-5115-4f61-a044-7a879d89faef] Pending
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-b87c45988-j7sg7" [75239971-5115-4f61-a044-7a879d89faef] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-b87c45988-j7sg7" [75239971-5115-4f61-a044-7a879d89faef] Running
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1728: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.045887489s
functional_test.go:1736: (dbg) Run:  kubectl --context functional-20220511225632-7294 exec mysql-b87c45988-j7sg7 -- mysql -ppassword -e "show databases;"
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1736: (dbg) Non-zero exit: kubectl --context functional-20220511225632-7294 exec mysql-b87c45988-j7sg7 -- mysql -ppassword -e "show databases;": exit status 1 (241.609822ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1736: (dbg) Run:  kubectl --context functional-20220511225632-7294 exec mysql-b87c45988-j7sg7 -- mysql -ppassword -e "show databases;"
functional_test.go:1736: (dbg) Non-zero exit: kubectl --context functional-20220511225632-7294 exec mysql-b87c45988-j7sg7 -- mysql -ppassword -e "show databases;": exit status 1 (343.271147ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1736: (dbg) Run:  kubectl --context functional-20220511225632-7294 exec mysql-b87c45988-j7sg7 -- mysql -ppassword -e "show databases;"
functional_test.go:1736: (dbg) Non-zero exit: kubectl --context functional-20220511225632-7294 exec mysql-b87c45988-j7sg7 -- mysql -ppassword -e "show databases;": exit status 1 (394.995363ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1736: (dbg) Run:  kubectl --context functional-20220511225632-7294 exec mysql-b87c45988-j7sg7 -- mysql -ppassword -e "show databases;"
functional_test.go:1736: (dbg) Non-zero exit: kubectl --context functional-20220511225632-7294 exec mysql-b87c45988-j7sg7 -- mysql -ppassword -e "show databases;": exit status 1 (125.901368ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1736: (dbg) Run:  kubectl --context functional-20220511225632-7294 exec mysql-b87c45988-j7sg7 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.73s)
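The repeated ERROR 1045/2002 exits above are expected while the freshly created MySQL container initializes; the test simply retries the query until it succeeds. A generic sketch of that retry pattern (the function name, attempt count, and delay are illustrative, not the test's actual code):

```shell
# retry: run a command up to N times with a pause between attempts,
# succeeding as soon as the command does -- the pattern the test follows
# while waiting for mysqld to finish initializing and accept the password.
retry() {
  local attempts=$1 delay=$2; shift 2
  local i
  for i in $(seq 1 "$attempts"); do
    "$@" && return 0
    sleep "$delay"
  done
  return 1
}

# In the log above the retried command is:
#   kubectl --context functional-20220511225632-7294 \
#     exec mysql-b87c45988-j7sg7 -- mysql -ppassword -e "show databases;"
retry 3 0 true && echo "command eventually succeeded"
```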
TestFunctional/parallel/FileSync (0.52s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1858: Checking for existence of /etc/test/nested/copy/7294/hosts within VM
functional_test.go:1860: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 ssh "sudo cat /etc/test/nested/copy/7294/hosts"
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1865: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.52s)
TestFunctional/parallel/CertSync (2.64s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1901: Checking for existence of /etc/ssl/certs/7294.pem within VM
functional_test.go:1902: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 ssh "sudo cat /etc/ssl/certs/7294.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1901: Checking for existence of /usr/share/ca-certificates/7294.pem within VM
functional_test.go:1902: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 ssh "sudo cat /usr/share/ca-certificates/7294.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1901: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1902: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 ssh "sudo cat /etc/ssl/certs/51391683.0"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1928: Checking for existence of /etc/ssl/certs/72942.pem within VM
functional_test.go:1929: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 ssh "sudo cat /etc/ssl/certs/72942.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1928: Checking for existence of /usr/share/ca-certificates/72942.pem within VM
functional_test.go:1929: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 ssh "sudo cat /usr/share/ca-certificates/72942.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1928: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1929: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.64s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20220511225632-7294 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1956: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 ssh "sudo systemctl is-active crio"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1956: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220511225632-7294 ssh "sudo systemctl is-active crio": exit status 1 (448.37753ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2185: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.6s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create

=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1268: (dbg) Run:  out/minikube-linux-amd64 profile lis

=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1273: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.60s)

TestFunctional/parallel/Version/components (1.81s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2199: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 version -o=json --components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2199: (dbg) Done: out/minikube-linux-amd64 -p functional-20220511225632-7294 version -o=json --components: (1.810631746s)
--- PASS: TestFunctional/parallel/Version/components (1.81s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 image ls --format short
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220511225632-7294 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.23.5
k8s.gcr.io/kube-proxy:v1.23.5
k8s.gcr.io/kube-controller-manager:v1.23.5
k8s.gcr.io/kube-apiserver:v1.23.5
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220511225632-7294
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220511225632-7294
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.36s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 image ls --format table
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220511225632-7294 image ls --format table:
|---------------------------------------------|--------------------------------|---------------|--------|
|                    Image                    |              Tag               |   Image ID    |  Size  |
|---------------------------------------------|--------------------------------|---------------|--------|
| docker.io/library/nginx                     | alpine                         | 51696c87e77e4 | 23.4MB |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                         | a4ca41631cc7a | 46.8MB |
| docker.io/kubernetesui/metrics-scraper      | <none>                         | 7801cfc6d5c07 | 34.4MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                             | 6e38f40d628db | 31.5MB |
| k8s.gcr.io/pause                            | 3.3                            | 0184c1613d929 | 683kB  |
| k8s.gcr.io/echoserver                       | 1.8                            | 82e4c8a736a4f | 95.4MB |
| docker.io/localhost/my-image                | functional-20220511225632-7294 | 7b9ea0ea3c7bc | 1.24MB |
| docker.io/library/nginx                     | latest                         | 7425d3a7c478e | 142MB  |
| k8s.gcr.io/etcd                             | 3.5.1-0                        | 25f8c7f3da61c | 293MB  |
| k8s.gcr.io/pause                            | 3.1                            | da86e6ba6ca19 | 742kB  |
| k8s.gcr.io/pause                            | latest                         | 350b164e7ae1d | 240kB  |
| k8s.gcr.io/kube-apiserver                   | v1.23.5                        | 3fc1d62d65872 | 135MB  |
| gcr.io/k8s-minikube/busybox                 | latest                         | beae173ccac6a | 1.24MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                   | 56cc512116c8f | 4.4MB  |
| docker.io/library/mysql                     | 5.7                            | a3d35804fa376 | 462MB  |
| k8s.gcr.io/kube-controller-manager          | v1.23.5                        | b0c9e5e4dbb14 | 125MB  |
| k8s.gcr.io/kube-scheduler                   | v1.23.5                        | 884d49d6d8c9f | 53.5MB |
| docker.io/kubernetesui/dashboard            | <none>                         | 7fff914c4a615 | 243MB  |
| k8s.gcr.io/pause                            | 3.6                            | 6270bb605e12e | 683kB  |
| gcr.io/google-containers/addon-resizer      | functional-20220511225632-7294 | ffd4cfbbe753e | 32.9MB |
| docker.io/library/minikube-local-cache-test | functional-20220511225632-7294 | 488f46086bd33 | 30B    |
| k8s.gcr.io/kube-proxy                       | v1.23.5                        | 3c53fa8541f95 | 112MB  |
|---------------------------------------------|--------------------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 image ls --format json

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220511225632-7294 image ls --format json:
[{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"7b9ea0ea3c7bce0037f1b1c523449e6f6d2127aa95bf49f733b33dc2e460b3ed","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-20220511225632-7294"],"size":"1240000"},{"id":"3fc1d62d65872296462b198ab7842d0faf8c336b236c4a0dacfce67bec95257f","repoDigests":[],"repoTags":["k8s.gcr.io/kube-apiserver:v1.23.5"],"size":"135000000"},{"id":"7fff914c4a615552dde44bde1183cdaf1656495d54327823c37e897e6c999fe8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"243000000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"488f46086bd3393d2570e251853173aa942ad88ee3924a649442776b17afa7e2","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220511225632-7294"],"size":"30"},{"id":"3c53fa8541f95165d3def81704febb85e2e13f90872667f9939dd856dc88e874","repoDigests":[],"repoTags":["k8s.gcr.io/kube-proxy:v1.23.5"],"size":"112000000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"7425d3a7c478efbeb75f0937060117343a9a510f72f5f7ad9f14b1501a36940c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"a3d35804fa376a141b9a9dad8f5534c3179f4c328d6efc67c5c5145d257c291a","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"462000000"},{"id":"7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"34400000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220511225632-7294"],"size":"32900000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"51696c87e77e4ff7a53af9be837f35d4eacdb47b4ca83ba5fd5e4b5101d98502","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23400000"},{"id":"884d49d6d8c9f40672d20c78e300ffee238d01c1ccb2c132937125d97a596fd7","repoDigests":[],"repoTags":["k8s.gcr.io/kube-scheduler:v1.23.5"],"size":"53500000"},{"id":"b0c9e5e4dbb14459edc593b39add54f5497e42d4eecc8d03bee5daf9537b0dae","repoDigests":[],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.23.5"],"size":"125000000"},{"id":"25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d","repoDigests":[],"repoTags":["k8s.gcr.io/etcd:3.5.1-0"],"size":"293000000"},{"id":"a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":[],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"46800000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 image ls --format yaml
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220511225632-7294 image ls --format yaml:
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 488f46086bd3393d2570e251853173aa942ad88ee3924a649442776b17afa7e2
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220511225632-7294
size: "30"
- id: 51696c87e77e4ff7a53af9be837f35d4eacdb47b4ca83ba5fd5e4b5101d98502
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23400000"
- id: 7fff914c4a615552dde44bde1183cdaf1656495d54327823c37e897e6c999fe8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "243000000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: 7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "34400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: a3d35804fa376a141b9a9dad8f5534c3179f4c328d6efc67c5c5145d257c291a
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "462000000"
- id: 3fc1d62d65872296462b198ab7842d0faf8c336b236c4a0dacfce67bec95257f
repoDigests: []
repoTags:
- k8s.gcr.io/kube-apiserver:v1.23.5
size: "135000000"
- id: 3c53fa8541f95165d3def81704febb85e2e13f90872667f9939dd856dc88e874
repoDigests: []
repoTags:
- k8s.gcr.io/kube-proxy:v1.23.5
size: "112000000"
- id: 884d49d6d8c9f40672d20c78e300ffee238d01c1ccb2c132937125d97a596fd7
repoDigests: []
repoTags:
- k8s.gcr.io/kube-scheduler:v1.23.5
size: "53500000"
- id: b0c9e5e4dbb14459edc593b39add54f5497e42d4eecc8d03bee5daf9537b0dae
repoDigests: []
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.23.5
size: "125000000"
- id: 25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d
repoDigests: []
repoTags:
- k8s.gcr.io/etcd:3.5.1-0
size: "293000000"
- id: a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests: []
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "46800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220511225632-7294
size: "32900000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 ssh pgrep buildkitd
functional_test.go:303: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220511225632-7294 ssh pgrep buildkitd: exit status 1 (427.54663ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 image build -t localhost/my-image:functional-20220511225632-7294 testdata/build

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p functional-20220511225632-7294 image build -t localhost/my-image:functional-20220511225632-7294 testdata/build: (2.08001374s)
functional_test.go:315: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220511225632-7294 image build -t localhost/my-image:functional-20220511225632-7294 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 677c4c17fda9
Removing intermediate container 677c4c17fda9
---> 3f19ec44de80
Step 3/3 : ADD content.txt /
---> 7b9ea0ea3c7b
Successfully built 7b9ea0ea3c7b
Successfully tagged localhost/my-image:functional-20220511225632-7294
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.76s)

TestFunctional/parallel/ImageCommands/Setup (1.04s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220511225632-7294
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.04s)

TestFunctional/parallel/ProfileCmd/profile_list (0.54s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1308: (dbg) Run:  out/minikube-linux-amd64 profile list

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1313: Took "453.187666ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1322: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1327: Took "88.395869ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.54s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.61s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1359: (dbg) Run:  out/minikube-linux-amd64 profile list -o json

=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1364: Took "522.20738ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1372: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1377: Took "83.99141ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.61s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2048: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2048: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2048: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220511225632-7294

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Done: out/minikube-linux-amd64 -p functional-20220511225632-7294 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220511225632-7294: (4.974347645s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.38s)

TestFunctional/parallel/DockerEnv/bash (1.94s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:494: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20220511225632-7294 docker-env) && out/minikube-linux-amd64 status -p functional-20220511225632-7294"

=== CONT  TestFunctional/parallel/DockerEnv/bash
functional_test.go:494: (dbg) Done: /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20220511225632-7294 docker-env) && out/minikube-linux-amd64 status -p functional-20220511225632-7294": (1.338464554s)
functional_test.go:517: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20220511225632-7294 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.94s)

TestFunctional/parallel/MountCmd/any-port (13.51s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220511225632-7294 /tmp/TestFunctionalparallelMountCmdany-port391572582/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1652309878772122032" to /tmp/TestFunctionalparallelMountCmdany-port391572582/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1652309878772122032" to /tmp/TestFunctionalparallelMountCmdany-port391572582/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1652309878772122032" to /tmp/TestFunctionalparallelMountCmdany-port391572582/001/test-1652309878772122032
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220511225632-7294 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (501.962981ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 ssh -- ls -la /mount-9p
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 May 11 22:57 created-by-test
-rw-r--r-- 1 docker docker 24 May 11 22:57 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 May 11 22:57 test-1652309878772122032
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 ssh cat /mount-9p/test-1652309878772122032
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-20220511225632-7294 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [37323179-f1e3-428c-92d6-3a46979028de] Pending
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [37323179-f1e3-428c-92d6-3a46979028de] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [37323179-f1e3-428c-92d6-3a46979028de] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.00812476s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-20220511225632-7294 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 ssh stat /mount-9p/created-by-pod
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 ssh "sudo umount -f /mount-9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220511225632-7294 /tmp/TestFunctionalparallelMountCmdany-port391572582/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (13.51s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220511225632-7294
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Done: out/minikube-linux-amd64 -p functional-20220511225632-7294 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220511225632-7294: (3.87211684s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.40s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:235: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220511225632-7294
functional_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220511225632-7294
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:240: (dbg) Done: out/minikube-linux-amd64 -p functional-20220511225632-7294 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220511225632-7294: (4.754534469s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.50s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 image save gcr.io/google-containers/addon-resizer:functional-20220511225632-7294 /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Done: out/minikube-linux-amd64 -p functional-20220511225632-7294 image save gcr.io/google-containers/addon-resizer:functional-20220511225632-7294 /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar: (2.301973434s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.30s)

TestFunctional/parallel/MountCmd/specific-port (1.91s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220511225632-7294 /tmp/TestFunctionalparallelMountCmdspecific-port652872821/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 ssh -- ls -la /mount-9p
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220511225632-7294 /tmp/TestFunctionalparallelMountCmdspecific-port652872821/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220511225632-7294 /tmp/TestFunctionalparallelMountCmdspecific-port652872821/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.91s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20220511225632-7294 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.27s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20220511225632-7294 apply -f testdata/testsvc.yaml
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [d82c0e5e-2ece-4c94-903c-6e7eb32ccd13] Pending
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [d82c0e5e-2ece-4c94-903c-6e7eb32ccd13] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [d82c0e5e-2ece-4c94-903c-6e7eb32ccd13] Running
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.073933576s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.27s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 image rm gcr.io/google-containers/addon-resizer:functional-20220511225632-7294
=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.95s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 image load /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Done: out/minikube-linux-amd64 -p functional-20220511225632-7294 image load /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar: (2.563482537s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.86s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220511225632-7294
functional_test.go:419: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220511225632-7294 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220511225632-7294
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Done: out/minikube-linux-amd64 -p functional-20220511225632-7294 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220511225632-7294: (3.380119074s)
functional_test.go:424: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220511225632-7294
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.55s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220511225632-7294 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://10.104.1.156 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-linux-amd64 -p functional-20220511225632-7294 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_addon-resizer_images (0.1s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220511225632-7294
--- PASS: TestFunctional/delete_addon-resizer_images (0.10s)

TestFunctional/delete_my-image_image (0.03s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220511225632-7294
--- PASS: TestFunctional/delete_my-image_image (0.03s)

TestFunctional/delete_minikube_cached_images (0.03s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220511225632-7294
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)

TestIngressAddonLegacy/StartLegacyK8sCluster (51.3s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-20220511225849-7294 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0511 22:59:12.539233    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511225237-7294/client.crt: no such file or directory
E0511 22:59:12.545136    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511225237-7294/client.crt: no such file or directory
E0511 22:59:12.555630    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511225237-7294/client.crt: no such file or directory
E0511 22:59:12.575923    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511225237-7294/client.crt: no such file or directory
E0511 22:59:12.616202    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511225237-7294/client.crt: no such file or directory
E0511 22:59:12.696547    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511225237-7294/client.crt: no such file or directory
E0511 22:59:12.856978    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511225237-7294/client.crt: no such file or directory
E0511 22:59:13.177541    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511225237-7294/client.crt: no such file or directory
E0511 22:59:13.818446    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511225237-7294/client.crt: no such file or directory
E0511 22:59:15.099310    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511225237-7294/client.crt: no such file or directory
E0511 22:59:17.659905    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511225237-7294/client.crt: no such file or directory
E0511 22:59:22.780874    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511225237-7294/client.crt: no such file or directory
E0511 22:59:33.021946    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511225237-7294/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-20220511225849-7294 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (51.299423476s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (51.30s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.17s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220511225849-7294 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220511225849-7294 addons enable ingress --alsologtostderr -v=5: (11.168086025s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.17s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.38s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220511225849-7294 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.38s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (35.15s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:162: (dbg) Run:  kubectl --context ingress-addon-legacy-20220511225849-7294 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E0511 22:59:53.502237    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511225237-7294/client.crt: no such file or directory
addons_test.go:162: (dbg) Done: kubectl --context ingress-addon-legacy-20220511225849-7294 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (14.100870636s)
addons_test.go:182: (dbg) Run:  kubectl --context ingress-addon-legacy-20220511225849-7294 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:195: (dbg) Run:  kubectl --context ingress-addon-legacy-20220511225849-7294 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:200: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [f916b248-fff3-4f95-9df0-f42771587a22] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [f916b248-fff3-4f95-9df0-f42771587a22] Running
addons_test.go:200: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.005195828s
addons_test.go:212: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220511225849-7294 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:236: (dbg) Run:  kubectl --context ingress-addon-legacy-20220511225849-7294 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220511225849-7294 ip
addons_test.go:247: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220511225849-7294 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220511225849-7294 addons disable ingress-dns --alsologtostderr -v=1: (2.369455523s)
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220511225849-7294 addons disable ingress --alsologtostderr -v=1
addons_test.go:261: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220511225849-7294 addons disable ingress --alsologtostderr -v=1: (7.277985557s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (35.15s)

TestJSONOutput/start/Command (39.9s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-20220511230029-7294 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
E0511 23:00:34.463024    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511225237-7294/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-20220511230029-7294 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (39.898807728s)
--- PASS: TestJSONOutput/start/Command (39.90s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.66s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-20220511230029-7294 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-20220511230029-7294 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.86s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-20220511230029-7294 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-20220511230029-7294 --output=json --user=testUser: (10.863688578s)
--- PASS: TestJSONOutput/stop/Command (10.86s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.31s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20220511230123-7294 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20220511230123-7294 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (83.191741ms)

-- stdout --
	{"specversion":"1.0","id":"98ae24ff-7683-427a-93ae-be15e1a3c188","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220511230123-7294] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6bbc1a0d-eadf-4361-8557-544a26e00b1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13639"}}
	{"specversion":"1.0","id":"9cd8859a-0214-4a61-8b3c-828508084836","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9fdbb284-c83c-416e-ab1c-9375129766a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig"}}
	{"specversion":"1.0","id":"dfef7778-659a-4109-a790-e018b8da3ad0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube"}}
	{"specversion":"1.0","id":"e950a536-142c-43f8-94ca-7201a352544d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a9856440-029b-4841-ba8f-bd2f3aa39bcb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20220511230123-7294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20220511230123-7294
--- PASS: TestErrorJSONOutput (0.31s)

TestKicCustomNetwork/create_custom_network (26.93s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220511230123-7294 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220511230123-7294 --network=: (24.672041094s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220511230123-7294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220511230123-7294
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220511230123-7294: (2.22401821s)
--- PASS: TestKicCustomNetwork/create_custom_network (26.93s)

TestKicCustomNetwork/use_default_bridge_network (26.23s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220511230150-7294 --network=bridge
E0511 23:01:56.385736    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511225237-7294/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220511230150-7294 --network=bridge: (24.12565298s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220511230150-7294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220511230150-7294
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220511230150-7294: (2.070534461s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.23s)

TestKicExistingNetwork (27.03s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-20220511230217-7294 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-20220511230217-7294 --network=existing-network: (24.521960778s)
helpers_test.go:175: Cleaning up "existing-network-20220511230217-7294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-20220511230217-7294
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-20220511230217-7294: (2.284978866s)
--- PASS: TestKicExistingNetwork (27.03s)

TestKicCustomSubnet (26.76s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-20220511230244-7294 --subnet=192.168.60.0/24
E0511 23:02:56.714073    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294/client.crt: no such file or directory
E0511 23:02:56.719379    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294/client.crt: no such file or directory
E0511 23:02:56.729651    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294/client.crt: no such file or directory
E0511 23:02:56.749944    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294/client.crt: no such file or directory
E0511 23:02:56.790259    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294/client.crt: no such file or directory
E0511 23:02:56.870615    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294/client.crt: no such file or directory
E0511 23:02:57.031056    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294/client.crt: no such file or directory
E0511 23:02:57.351709    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294/client.crt: no such file or directory
E0511 23:02:57.992601    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294/client.crt: no such file or directory
E0511 23:02:59.273202    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294/client.crt: no such file or directory
E0511 23:03:01.834236    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294/client.crt: no such file or directory
E0511 23:03:06.954550    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-20220511230244-7294 --subnet=192.168.60.0/24: (24.474852224s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-20220511230244-7294 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-20220511230244-7294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-20220511230244-7294
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-20220511230244-7294: (2.248411102s)
--- PASS: TestKicCustomSubnet (26.76s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMountStart/serial/StartWithMountFirst (5.6s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-20220511230310-7294 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-20220511230310-7294 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (4.594823311s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.60s)

TestMountStart/serial/VerifyMountFirst (0.35s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-20220511230310-7294 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.35s)

TestMountStart/serial/StartWithMountSecond (5.59s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220511230310-7294 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
E0511 23:03:17.195312    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220511230310-7294 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (4.594021368s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.59s)

TestMountStart/serial/VerifyMountSecond (0.35s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220511230310-7294 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.35s)

TestMountStart/serial/DeleteFirst (1.74s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-20220511230310-7294 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-20220511230310-7294 --alsologtostderr -v=5: (1.741600481s)
--- PASS: TestMountStart/serial/DeleteFirst (1.74s)

TestMountStart/serial/VerifyMountPostDelete (0.34s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220511230310-7294 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.34s)

TestMountStart/serial/Stop (1.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-20220511230310-7294
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-20220511230310-7294: (1.271574238s)
--- PASS: TestMountStart/serial/Stop (1.27s)

TestMountStart/serial/RestartStopped (6.58s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220511230310-7294
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220511230310-7294: (5.576839424s)
--- PASS: TestMountStart/serial/RestartStopped (6.58s)

TestMountStart/serial/VerifyMountPostStop (0.34s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220511230310-7294 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.34s)

TestMultiNode/serial/FreshStart2Nodes (71.94s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220511230335-7294 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0511 23:03:37.675541    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294/client.crt: no such file or directory
E0511 23:04:12.537795    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511225237-7294/client.crt: no such file or directory
E0511 23:04:18.636318    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294/client.crt: no such file or directory
E0511 23:04:40.225978    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511225237-7294/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220511230335-7294 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m11.354513905s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (71.94s)

TestMultiNode/serial/DeployApp2Nodes (3.94s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220511230335-7294 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220511230335-7294 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20220511230335-7294 -- rollout status deployment/busybox: (2.260673185s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220511230335-7294 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220511230335-7294 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220511230335-7294 -- exec busybox-7978565885-tb5x5 -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220511230335-7294 -- exec busybox-7978565885-tvxcf -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220511230335-7294 -- exec busybox-7978565885-tb5x5 -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220511230335-7294 -- exec busybox-7978565885-tvxcf -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220511230335-7294 -- exec busybox-7978565885-tb5x5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220511230335-7294 -- exec busybox-7978565885-tvxcf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.94s)

TestMultiNode/serial/PingHostFrom2Pods (0.86s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220511230335-7294 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220511230335-7294 -- exec busybox-7978565885-tb5x5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220511230335-7294 -- exec busybox-7978565885-tb5x5 -- sh -c "ping -c 1 192.168.49.1"
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220511230335-7294 -- exec busybox-7978565885-tvxcf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220511230335-7294 -- exec busybox-7978565885-tvxcf -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.86s)

TestMultiNode/serial/AddNode (26.39s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220511230335-7294 -v 3 --alsologtostderr
E0511 23:04:51.975855    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511225849-7294/client.crt: no such file or directory
E0511 23:04:51.981147    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511225849-7294/client.crt: no such file or directory
E0511 23:04:51.992227    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511225849-7294/client.crt: no such file or directory
E0511 23:04:52.012468    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511225849-7294/client.crt: no such file or directory
E0511 23:04:52.052747    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511225849-7294/client.crt: no such file or directory
E0511 23:04:52.133839    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511225849-7294/client.crt: no such file or directory
E0511 23:04:52.294663    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511225849-7294/client.crt: no such file or directory
E0511 23:04:52.615317    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511225849-7294/client.crt: no such file or directory
E0511 23:04:53.255941    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511225849-7294/client.crt: no such file or directory
E0511 23:04:54.536502    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511225849-7294/client.crt: no such file or directory
E0511 23:04:57.097439    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511225849-7294/client.crt: no such file or directory
E0511 23:05:02.218107    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511225849-7294/client.crt: no such file or directory
E0511 23:05:12.458485    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511225849-7294/client.crt: no such file or directory
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20220511230335-7294 -v 3 --alsologtostderr: (25.62472073s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (26.39s)

TestMultiNode/serial/ProfileList (0.37s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.37s)

TestMultiNode/serial/CopyFile (12.25s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 status --output json --alsologtostderr
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 cp testdata/cp-test.txt multinode-20220511230335-7294:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 ssh -n multinode-20220511230335-7294 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 cp multinode-20220511230335-7294:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1495492664/001/cp-test_multinode-20220511230335-7294.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 ssh -n multinode-20220511230335-7294 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 cp multinode-20220511230335-7294:/home/docker/cp-test.txt multinode-20220511230335-7294-m02:/home/docker/cp-test_multinode-20220511230335-7294_multinode-20220511230335-7294-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 ssh -n multinode-20220511230335-7294 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 ssh -n multinode-20220511230335-7294-m02 "sudo cat /home/docker/cp-test_multinode-20220511230335-7294_multinode-20220511230335-7294-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 cp multinode-20220511230335-7294:/home/docker/cp-test.txt multinode-20220511230335-7294-m03:/home/docker/cp-test_multinode-20220511230335-7294_multinode-20220511230335-7294-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 ssh -n multinode-20220511230335-7294 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 ssh -n multinode-20220511230335-7294-m03 "sudo cat /home/docker/cp-test_multinode-20220511230335-7294_multinode-20220511230335-7294-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 cp testdata/cp-test.txt multinode-20220511230335-7294-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 ssh -n multinode-20220511230335-7294-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 cp multinode-20220511230335-7294-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1495492664/001/cp-test_multinode-20220511230335-7294-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 ssh -n multinode-20220511230335-7294-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 cp multinode-20220511230335-7294-m02:/home/docker/cp-test.txt multinode-20220511230335-7294:/home/docker/cp-test_multinode-20220511230335-7294-m02_multinode-20220511230335-7294.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 ssh -n multinode-20220511230335-7294-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 ssh -n multinode-20220511230335-7294 "sudo cat /home/docker/cp-test_multinode-20220511230335-7294-m02_multinode-20220511230335-7294.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 cp multinode-20220511230335-7294-m02:/home/docker/cp-test.txt multinode-20220511230335-7294-m03:/home/docker/cp-test_multinode-20220511230335-7294-m02_multinode-20220511230335-7294-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 ssh -n multinode-20220511230335-7294-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 ssh -n multinode-20220511230335-7294-m03 "sudo cat /home/docker/cp-test_multinode-20220511230335-7294-m02_multinode-20220511230335-7294-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 cp testdata/cp-test.txt multinode-20220511230335-7294-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 ssh -n multinode-20220511230335-7294-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 cp multinode-20220511230335-7294-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1495492664/001/cp-test_multinode-20220511230335-7294-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 ssh -n multinode-20220511230335-7294-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 cp multinode-20220511230335-7294-m03:/home/docker/cp-test.txt multinode-20220511230335-7294:/home/docker/cp-test_multinode-20220511230335-7294-m03_multinode-20220511230335-7294.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 ssh -n multinode-20220511230335-7294-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 ssh -n multinode-20220511230335-7294 "sudo cat /home/docker/cp-test_multinode-20220511230335-7294-m03_multinode-20220511230335-7294.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 cp multinode-20220511230335-7294-m03:/home/docker/cp-test.txt multinode-20220511230335-7294-m02:/home/docker/cp-test_multinode-20220511230335-7294-m03_multinode-20220511230335-7294-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 ssh -n multinode-20220511230335-7294-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 ssh -n multinode-20220511230335-7294-m02 "sudo cat /home/docker/cp-test_multinode-20220511230335-7294-m03_multinode-20220511230335-7294-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (12.25s)

TestMultiNode/serial/StopNode (2.51s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220511230335-7294 node stop m03: (1.282038242s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220511230335-7294 status: exit status 7 (613.481543ms)

-- stdout --
	multinode-20220511230335-7294
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220511230335-7294-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220511230335-7294-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 status --alsologtostderr
E0511 23:05:32.939459    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511225849-7294/client.crt: no such file or directory
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220511230335-7294 status --alsologtostderr: exit status 7 (611.779347ms)

-- stdout --
	multinode-20220511230335-7294
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220511230335-7294-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220511230335-7294-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0511 23:05:32.720484   99968 out.go:296] Setting OutFile to fd 1 ...
	I0511 23:05:32.720608   99968 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 23:05:32.720618   99968 out.go:309] Setting ErrFile to fd 2...
	I0511 23:05:32.720623   99968 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 23:05:32.720720   99968 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/bin
	I0511 23:05:32.720883   99968 out.go:303] Setting JSON to false
	I0511 23:05:32.720902   99968 mustload.go:65] Loading cluster: multinode-20220511230335-7294
	I0511 23:05:32.721194   99968 config.go:178] Loaded profile config "multinode-20220511230335-7294": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0511 23:05:32.721209   99968 status.go:253] checking status of multinode-20220511230335-7294 ...
	I0511 23:05:32.721570   99968 cli_runner.go:164] Run: docker container inspect multinode-20220511230335-7294 --format={{.State.Status}}
	I0511 23:05:32.753992   99968 status.go:328] multinode-20220511230335-7294 host status = "Running" (err=<nil>)
	I0511 23:05:32.754018   99968 host.go:66] Checking if "multinode-20220511230335-7294" exists ...
	I0511 23:05:32.754313   99968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220511230335-7294
	I0511 23:05:32.786731   99968 host.go:66] Checking if "multinode-20220511230335-7294" exists ...
	I0511 23:05:32.787063   99968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0511 23:05:32.787127   99968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220511230335-7294
	I0511 23:05:32.819338   99968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49217 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/multinode-20220511230335-7294/id_rsa Username:docker}
	I0511 23:05:32.903124   99968 ssh_runner.go:195] Run: systemctl --version
	I0511 23:05:32.906994   99968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0511 23:05:32.916180   99968 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 23:05:33.021442   99968 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-11 23:05:32.946393707 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clie
ntInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0511 23:05:33.022009   99968 kubeconfig.go:92] found "multinode-20220511230335-7294" server: "https://192.168.49.2:8443"
	I0511 23:05:33.022036   99968 api_server.go:165] Checking apiserver status ...
	I0511 23:05:33.022075   99968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0511 23:05:33.031653   99968 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1689/cgroup
	I0511 23:05:33.039090   99968 api_server.go:181] apiserver freezer: "8:freezer:/docker/8125184fda078375662613e384f50b33e4480fbb5e1f40fc77ce30414ba8a952/kubepods/burstable/pod80721e1768bfa0609bf4f0b6f82398bf/b463ab09efaca124a92e2f11154117c53b7a4fab176187eefd6bab7f3793eb06"
	I0511 23:05:33.039141   99968 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8125184fda078375662613e384f50b33e4480fbb5e1f40fc77ce30414ba8a952/kubepods/burstable/pod80721e1768bfa0609bf4f0b6f82398bf/b463ab09efaca124a92e2f11154117c53b7a4fab176187eefd6bab7f3793eb06/freezer.state
	I0511 23:05:33.045602   99968 api_server.go:203] freezer state: "THAWED"
	I0511 23:05:33.045640   99968 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0511 23:05:33.050309   99968 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0511 23:05:33.050335   99968 status.go:419] multinode-20220511230335-7294 apiserver status = Running (err=<nil>)
	I0511 23:05:33.050346   99968 status.go:255] multinode-20220511230335-7294 status: &{Name:multinode-20220511230335-7294 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0511 23:05:33.050367   99968 status.go:253] checking status of multinode-20220511230335-7294-m02 ...
	I0511 23:05:33.050660   99968 cli_runner.go:164] Run: docker container inspect multinode-20220511230335-7294-m02 --format={{.State.Status}}
	I0511 23:05:33.082990   99968 status.go:328] multinode-20220511230335-7294-m02 host status = "Running" (err=<nil>)
	I0511 23:05:33.083017   99968 host.go:66] Checking if "multinode-20220511230335-7294-m02" exists ...
	I0511 23:05:33.083261   99968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220511230335-7294-m02
	I0511 23:05:33.116564   99968 host.go:66] Checking if "multinode-20220511230335-7294-m02" exists ...
	I0511 23:05:33.116837   99968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0511 23:05:33.116877   99968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220511230335-7294-m02
	I0511 23:05:33.148077   99968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49222 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/multinode-20220511230335-7294-m02/id_rsa Username:docker}
	I0511 23:05:33.226744   99968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0511 23:05:33.235771   99968 status.go:255] multinode-20220511230335-7294-m02 status: &{Name:multinode-20220511230335-7294-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0511 23:05:33.235810   99968 status.go:253] checking status of multinode-20220511230335-7294-m03 ...
	I0511 23:05:33.236090   99968 cli_runner.go:164] Run: docker container inspect multinode-20220511230335-7294-m03 --format={{.State.Status}}
	I0511 23:05:33.268380   99968 status.go:328] multinode-20220511230335-7294-m03 host status = "Stopped" (err=<nil>)
	I0511 23:05:33.268408   99968 status.go:341] host is not running, skipping remaining checks
	I0511 23:05:33.268416   99968 status.go:255] multinode-20220511230335-7294-m03 status: &{Name:multinode-20220511230335-7294-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.51s)

TestMultiNode/serial/StartAfterStop (25.2s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 node start m03 --alsologtostderr
E0511 23:05:40.556694    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294/client.crt: no such file or directory
multinode_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220511230335-7294 node start m03 --alsologtostderr: (24.340586743s)
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (25.20s)

TestMultiNode/serial/RestartKeepsNodes (105.57s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220511230335-7294
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-20220511230335-7294
E0511 23:06:13.900508    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511225849-7294/client.crt: no such file or directory
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-20220511230335-7294: (22.690281453s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220511230335-7294 --wait=true -v=8 --alsologtostderr
E0511 23:07:35.821354    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511225849-7294/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220511230335-7294 --wait=true -v=8 --alsologtostderr: (1m22.745452502s)
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220511230335-7294
--- PASS: TestMultiNode/serial/RestartKeepsNodes (105.57s)

TestMultiNode/serial/DeleteNode (5.3s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220511230335-7294 node delete m03: (4.583915805s)
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.30s)

TestMultiNode/serial/StopMultiNode (21.78s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 stop
E0511 23:07:56.715326    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294/client.crt: no such file or directory
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220511230335-7294 stop: (21.518591719s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220511230335-7294 status: exit status 7 (130.230818ms)

-- stdout --
	multinode-20220511230335-7294
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220511230335-7294-m02
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220511230335-7294 status --alsologtostderr: exit status 7 (131.033813ms)

-- stdout --
	multinode-20220511230335-7294
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220511230335-7294-m02
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0511 23:08:11.054374  114328 out.go:296] Setting OutFile to fd 1 ...
	I0511 23:08:11.054550  114328 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 23:08:11.054560  114328 out.go:309] Setting ErrFile to fd 2...
	I0511 23:08:11.054566  114328 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 23:08:11.054677  114328 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/bin
	I0511 23:08:11.054842  114328 out.go:303] Setting JSON to false
	I0511 23:08:11.054859  114328 mustload.go:65] Loading cluster: multinode-20220511230335-7294
	I0511 23:08:11.055205  114328 config.go:178] Loaded profile config "multinode-20220511230335-7294": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0511 23:08:11.055222  114328 status.go:253] checking status of multinode-20220511230335-7294 ...
	I0511 23:08:11.055564  114328 cli_runner.go:164] Run: docker container inspect multinode-20220511230335-7294 --format={{.State.Status}}
	I0511 23:08:11.087374  114328 status.go:328] multinode-20220511230335-7294 host status = "Stopped" (err=<nil>)
	I0511 23:08:11.087400  114328 status.go:341] host is not running, skipping remaining checks
	I0511 23:08:11.087407  114328 status.go:255] multinode-20220511230335-7294 status: &{Name:multinode-20220511230335-7294 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0511 23:08:11.087439  114328 status.go:253] checking status of multinode-20220511230335-7294-m02 ...
	I0511 23:08:11.087694  114328 cli_runner.go:164] Run: docker container inspect multinode-20220511230335-7294-m02 --format={{.State.Status}}
	I0511 23:08:11.120132  114328 status.go:328] multinode-20220511230335-7294-m02 host status = "Stopped" (err=<nil>)
	I0511 23:08:11.120158  114328 status.go:341] host is not running, skipping remaining checks
	I0511 23:08:11.120165  114328 status.go:255] multinode-20220511230335-7294-m02 status: &{Name:multinode-20220511230335-7294-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.78s)

TestMultiNode/serial/RestartMultiNode (59.86s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220511230335-7294 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0511 23:08:24.397056    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220511230335-7294 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (59.132393629s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220511230335-7294 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (59.86s)

TestMultiNode/serial/ValidateNameConflict (27.76s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220511230335-7294
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220511230335-7294-m02 --driver=docker  --container-runtime=docker
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20220511230335-7294-m02 --driver=docker  --container-runtime=docker: exit status 14 (83.006033ms)

-- stdout --
	* [multinode-20220511230335-7294-m02] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13639
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220511230335-7294-m02' is duplicated with machine name 'multinode-20220511230335-7294-m02' in profile 'multinode-20220511230335-7294'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220511230335-7294-m03 --driver=docker  --container-runtime=docker
E0511 23:09:12.537938    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511225237-7294/client.crt: no such file or directory
multinode_test.go:458: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220511230335-7294-m03 --driver=docker  --container-runtime=docker: (24.9199168s)
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220511230335-7294
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20220511230335-7294: exit status 80 (369.605541ms)

-- stdout --
	* Adding node m03 to cluster multinode-20220511230335-7294

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220511230335-7294-m03 already exists in multinode-20220511230335-7294-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20220511230335-7294-m03
multinode_test.go:470: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20220511230335-7294-m03: (2.318663507s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (27.76s)

TestPreload (117.87s)

=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220511230943-7294 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.0
E0511 23:09:51.976149    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511225849-7294/client.crt: no such file or directory
E0511 23:10:19.662337    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511225849-7294/client.crt: no such file or directory
preload_test.go:48: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220511230943-7294 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.0: (1m19.530808087s)
preload_test.go:61: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220511230943-7294 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220511230943-7294 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.3
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220511230943-7294 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.3: (34.677423242s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220511230943-7294 -- docker images
helpers_test.go:175: Cleaning up "test-preload-20220511230943-7294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20220511230943-7294
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20220511230943-7294: (2.311060974s)
--- PASS: TestPreload (117.87s)

TestScheduledStopUnix (97.81s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20220511231140-7294 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20220511231140-7294 --memory=2048 --driver=docker  --container-runtime=docker: (24.181740886s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220511231140-7294 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220511231140-7294 -n scheduled-stop-20220511231140-7294
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220511231140-7294 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220511231140-7294 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220511231140-7294 -n scheduled-stop-20220511231140-7294
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220511231140-7294
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220511231140-7294 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0511 23:12:56.714933    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220511231140-7294
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-20220511231140-7294: exit status 7 (99.308531ms)

-- stdout --
	scheduled-stop-20220511231140-7294
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220511231140-7294 -n scheduled-stop-20220511231140-7294
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220511231140-7294 -n scheduled-stop-20220511231140-7294: exit status 7 (98.77134ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-20220511231140-7294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20220511231140-7294
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-20220511231140-7294: (1.842607804s)
--- PASS: TestScheduledStopUnix (97.81s)

TestSkaffold (54.88s)

=== RUN   TestSkaffold
skaffold_test.go:56: (dbg) Run:  /tmp/skaffold.exe2075650898 version
skaffold_test.go:60: skaffold version: v1.38.0
skaffold_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-20220511231318-7294 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-20220511231318-7294 --memory=2600 --driver=docker  --container-runtime=docker: (25.589061667s)
skaffold_test.go:83: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:107: (dbg) Run:  /tmp/skaffold.exe2075650898 run --minikube-profile skaffold-20220511231318-7294 --kube-context skaffold-20220511231318-7294 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:107: (dbg) Done: /tmp/skaffold.exe2075650898 run --minikube-profile skaffold-20220511231318-7294 --kube-context skaffold-20220511231318-7294 --status-check=true --port-forward=false --interactive=false: (16.238537291s)
skaffold_test.go:113: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:342: "leeroy-app-7477ccd4b-llnwx" [a255f969-b58b-43a4-91ef-1cf6e4713f0a] Running
skaffold_test.go:113: (dbg) TestSkaffold: app=leeroy-app healthy within 5.012463733s
skaffold_test.go:116: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:342: "leeroy-web-5c8bd96c59-h6pl9" [cb8b0f1c-689c-40ad-86de-340aba02c70a] Running
skaffold_test.go:116: (dbg) TestSkaffold: app=leeroy-web healthy within 5.006647449s
helpers_test.go:175: Cleaning up "skaffold-20220511231318-7294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-20220511231318-7294
E0511 23:14:12.537800    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511225237-7294/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-20220511231318-7294: (2.474871996s)
--- PASS: TestSkaffold (54.88s)

TestInsufficientStorage (13.45s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-20220511231413-7294 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-20220511231413-7294 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (10.818883317s)

-- stdout --
	{"specversion":"1.0","id":"36bd547e-359d-45e1-9512-cf398c72ee27","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220511231413-7294] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"245c9c3b-fec8-4e6b-bffc-c805d1c71ce7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13639"}}
	{"specversion":"1.0","id":"093266e6-4641-45a8-87b6-64efe1ea0810","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"27098332-c622-4d98-89e2-cfbb7f68eda4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig"}}
	{"specversion":"1.0","id":"4e2281bf-b42a-4555-95dc-3a61ca158b11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube"}}
	{"specversion":"1.0","id":"b4a3a936-9c23-49ed-a71c-e4d7cac467fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d1001db5-f0c5-4a4d-b5fc-d6b7873c4b13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"fc7c9299-e8ae-4b37-90fe-ea5b32f5e2ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c7956f8a-6431-42b3-9226-f349299a8683","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"279c8fc4-b3d8-4dec-9662-501f8d8668b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with the root privilege"}}
	{"specversion":"1.0","id":"21fe433c-0962-4ce2-8246-74cbac2e4ae4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220511231413-7294 in cluster insufficient-storage-20220511231413-7294","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c01c9d49-d478-4aa8-b0da-494036b59a77","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"891ebf5a-5837-416a-b7a6-3777c4fb0865","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"62dd2d42-451f-4f5b-9664-3585de99545a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220511231413-7294 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220511231413-7294 --output=json --layout=cluster: exit status 7 (368.058181ms)

-- stdout --
	{"Name":"insufficient-storage-20220511231413-7294","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.25.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220511231413-7294","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0511 23:14:24.826439  146969 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220511231413-7294" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220511231413-7294 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220511231413-7294 --output=json --layout=cluster: exit status 7 (366.423279ms)

-- stdout --
	{"Name":"insufficient-storage-20220511231413-7294","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.25.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220511231413-7294","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0511 23:14:25.192584  147079 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220511231413-7294" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	E0511 23:14:25.201611  147079 status.go:557] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/insufficient-storage-20220511231413-7294/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-20220511231413-7294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-20220511231413-7294
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-20220511231413-7294: (1.896456204s)
--- PASS: TestInsufficientStorage (13.45s)

TestRunningBinaryUpgrade (64.22s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.9.0.56283955.exe start -p running-upgrade-20220511231513-7294 --memory=2200 --vm-driver=docker  --container-runtime=docker

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /tmp/minikube-v1.9.0.56283955.exe start -p running-upgrade-20220511231513-7294 --memory=2200 --vm-driver=docker  --container-runtime=docker: (36.040490363s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20220511231513-7294 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-20220511231513-7294 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (25.485593265s)
helpers_test.go:175: Cleaning up "running-upgrade-20220511231513-7294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20220511231513-7294
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20220511231513-7294: (2.336684746s)
--- PASS: TestRunningBinaryUpgrade (64.22s)

TestKubernetesUpgrade (88.91s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220511231655-7294 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220511231655-7294 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (47.722972312s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220511231655-7294
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220511231655-7294: (1.297459975s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20220511231655-7294 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20220511231655-7294 status --format={{.Host}}: exit status 7 (106.775385ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220511231655-7294 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220511231655-7294 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (23.390237195s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220511231655-7294 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220511231655-7294 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220511231655-7294 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker: exit status 106 (107.11051ms)

-- stdout --
	* [kubernetes-upgrade-20220511231655-7294] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13639
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.23.6-rc.0 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20220511231655-7294
	    minikube start -p kubernetes-upgrade-20220511231655-7294 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220511231655-72942 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.23.6-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220511231655-7294 --kubernetes-version=v1.23.6-rc.0
	    

** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220511231655-7294 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:282: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220511231655-7294 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (13.519869026s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220511231655-7294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220511231655-7294
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220511231655-7294: (2.699587669s)
--- PASS: TestKubernetesUpgrade (88.91s)

TestMissingContainerUpgrade (108.14s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /tmp/minikube-v1.9.1.743724413.exe start -p missing-upgrade-20220511231639-7294 --memory=2200 --driver=docker  --container-runtime=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: /tmp/minikube-v1.9.1.743724413.exe start -p missing-upgrade-20220511231639-7294 --memory=2200 --driver=docker  --container-runtime=docker: (1m7.339355433s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-20220511231639-7294
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20220511231639-7294: (1.707317082s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-20220511231639-7294
version_upgrade_test.go:336: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-20220511231639-7294 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0511 23:17:56.713741    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294/client.crt: no such file or directory

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-20220511231639-7294 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (36.271797888s)
helpers_test.go:175: Cleaning up "missing-upgrade-20220511231639-7294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-20220511231639-7294
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-20220511231639-7294: (2.422220205s)
--- PASS: TestMissingContainerUpgrade (108.14s)

TestStoppedBinaryUpgrade/Setup (0.41s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.41s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220511231427-7294 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-20220511231427-7294 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (115.411641ms)

-- stdout --
	* [NoKubernetes-20220511231427-7294] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13639
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

TestNoKubernetes/serial/StartWithK8s (43.6s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220511231427-7294 --driver=docker  --container-runtime=docker

=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220511231427-7294 --driver=docker  --container-runtime=docker: (43.084592245s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220511231427-7294 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.60s)

TestStoppedBinaryUpgrade/Upgrade (83.54s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /tmp/minikube-v1.9.0.3262837583.exe start -p stopped-upgrade-20220511231427-7294 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0511 23:14:51.976305    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511225849-7294/client.crt: no such file or directory

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: /tmp/minikube-v1.9.0.3262837583.exe start -p stopped-upgrade-20220511231427-7294 --memory=2200 --vm-driver=docker  --container-runtime=docker: (50.07450378s)
version_upgrade_test.go:199: (dbg) Run:  /tmp/minikube-v1.9.0.3262837583.exe -p stopped-upgrade-20220511231427-7294 stop

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:199: (dbg) Done: /tmp/minikube-v1.9.0.3262837583.exe -p stopped-upgrade-20220511231427-7294 stop: (12.275529482s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-20220511231427-7294 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-20220511231427-7294 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (21.193701341s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (83.54s)

TestNoKubernetes/serial/StartWithStopK8s (19.71s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220511231427-7294 --no-kubernetes --driver=docker  --container-runtime=docker

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220511231427-7294 --no-kubernetes --driver=docker  --container-runtime=docker: (17.00554047s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220511231427-7294 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-20220511231427-7294 status -o json: exit status 2 (401.330351ms)

-- stdout --
	{"Name":"NoKubernetes-20220511231427-7294","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-20220511231427-7294

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-20220511231427-7294: (2.298802434s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.71s)

TestNoKubernetes/serial/Start (6.03s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220511231427-7294 --no-kubernetes --driver=docker  --container-runtime=docker
E0511 23:15:35.587103    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511225237-7294/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220511231427-7294 --no-kubernetes --driver=docker  --container-runtime=docker: (6.03419723s)
--- PASS: TestNoKubernetes/serial/Start (6.03s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.41s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220511231427-7294 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220511231427-7294 "sudo systemctl is-active --quiet service kubelet": exit status 1 (405.17564ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.41s)

TestNoKubernetes/serial/ProfileList (1.88s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.88s)

TestNoKubernetes/serial/Stop (1.32s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-20220511231427-7294
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-20220511231427-7294: (1.323445068s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

TestNoKubernetes/serial/StartNoArgs (6.16s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220511231427-7294 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220511231427-7294 --driver=docker  --container-runtime=docker: (6.164649717s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.16s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220511231427-7294 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220511231427-7294 "sudo systemctl is-active --quiet service kubelet": exit status 1 (391.28558ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)

TestPause/serial/Start (50.85s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220511231550-7294 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker

=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220511231550-7294 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (50.853843269s)
--- PASS: TestPause/serial/Start (50.85s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.78s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-20220511231427-7294
version_upgrade_test.go:213: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-20220511231427-7294: (1.776971832s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.78s)

TestPause/serial/SecondStartNoReconfiguration (6.70s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220511231550-7294 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220511231550-7294 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (6.689722648s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.70s)

TestPause/serial/Pause (1.13s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20220511231550-7294 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-20220511231550-7294 --alsologtostderr -v=5: (1.134445393s)
--- PASS: TestPause/serial/Pause (1.13s)

TestPause/serial/VerifyStatus (0.58s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20220511231550-7294 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20220511231550-7294 --output=json --layout=cluster: exit status 2 (579.193341ms)

-- stdout --
	{"Name":"pause-20220511231550-7294","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.25.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220511231550-7294","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.58s)

TestPause/serial/Unpause (0.70s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20220511231550-7294 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.70s)

TestPause/serial/PauseAgain (0.91s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20220511231550-7294 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.91s)

TestPause/serial/DeletePaused (2.50s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20220511231550-7294 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20220511231550-7294 --alsologtostderr -v=5: (2.495198216s)
--- PASS: TestPause/serial/DeletePaused (2.50s)

TestPause/serial/VerifyDeletedResources (0.70s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-20220511231550-7294
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-20220511231550-7294: exit status 1 (32.961012ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such volume: pause-20220511231550-7294

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.70s)

TestNetworkPlugins/group/auto/Start (494.04s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20220511231548-7294 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p auto-20220511231548-7294 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=docker: (8m14.03679764s)
--- PASS: TestNetworkPlugins/group/auto/Start (494.04s)

TestNetworkPlugins/group/false/Start (46.42s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p false-20220511231549-7294 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker  --container-runtime=docker
E0511 23:19:01.146692    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511231318-7294/client.crt: no such file or directory
E0511 23:19:01.152065    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511231318-7294/client.crt: no such file or directory
E0511 23:19:01.162369    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511231318-7294/client.crt: no such file or directory
E0511 23:19:01.182685    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511231318-7294/client.crt: no such file or directory
E0511 23:19:01.223024    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511231318-7294/client.crt: no such file or directory
E0511 23:19:01.303317    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511231318-7294/client.crt: no such file or directory
E0511 23:19:01.463700    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511231318-7294/client.crt: no such file or directory
E0511 23:19:01.784270    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511231318-7294/client.crt: no such file or directory
E0511 23:19:02.424490    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511231318-7294/client.crt: no such file or directory
E0511 23:19:03.704927    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511231318-7294/client.crt: no such file or directory
E0511 23:19:06.266185    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511231318-7294/client.crt: no such file or directory
E0511 23:19:11.387306    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511231318-7294/client.crt: no such file or directory
E0511 23:19:12.538096    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511225237-7294/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p false-20220511231549-7294 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker  --container-runtime=docker: (46.420217594s)
--- PASS: TestNetworkPlugins/group/false/Start (46.42s)

TestNetworkPlugins/group/false/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-20220511231549-7294 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.36s)

TestNetworkPlugins/group/false/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context false-20220511231549-7294 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-pvl4c" [533df0a9-3db2-4a33-b104-0ae453a7dc57] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-pvl4c" [533df0a9-3db2-4a33-b104-0ae453a7dc57] Running
E0511 23:19:19.757953    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294/client.crt: no such file or directory
E0511 23:19:21.627832    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511231318-7294/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 9.00963967s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (9.21s)

TestNetworkPlugins/group/false/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220511231549-7294 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.14s)

TestNetworkPlugins/group/false/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:188: (dbg) Run:  kubectl --context false-20220511231549-7294 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.12s)

TestNetworkPlugins/group/false/HairPin (5.14s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:238: (dbg) Run:  kubectl --context false-20220511231549-7294 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context false-20220511231549-7294 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.137185095s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.14s)

TestNetworkPlugins/group/cilium/Start (84.87s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20220511231549-7294 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=docker
E0511 23:19:42.108784    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511231318-7294/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20220511231549-7294 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=docker: (1m24.87026035s)
--- PASS: TestNetworkPlugins/group/cilium/Start (84.87s)

TestNetworkPlugins/group/enable-default-cni/Start (290.10s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20220511231548-7294 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20220511231548-7294 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=docker: (4m50.09505428s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (290.10s)

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-qvg5g" [03d79460-ce9c-4b48-9ca8-f3117cdfd1b4] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.014979736s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20220511231549-7294 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.35s)

TestNetworkPlugins/group/cilium/NetCatPod (11.88s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-20220511231549-7294 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-4jtdr" [98d5fc37-d74d-4cdb-a23d-c386a4eff5fb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-4jtdr" [98d5fc37-d74d-4cdb-a23d-c386a4eff5fb] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 11.006962904s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (11.88s)

TestNetworkPlugins/group/cilium/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-20220511231549-7294 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.20s)

TestNetworkPlugins/group/cilium/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-20220511231549-7294 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.19s)

TestNetworkPlugins/group/cilium/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-20220511231549-7294 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.14s)

TestNetworkPlugins/group/kindnet/Start (55.99s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20220511231549-7294 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=docker
E0511 23:21:44.989268    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511231318-7294/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20220511231549-7294 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=docker: (55.991508407s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (55.99s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-72fbw" [3e862203-3733-428f-bb02-780fc7ce91a5] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.013549587s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20220511231549-7294 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-20220511231549-7294 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-b4flg" [af5a7a4f-3b51-4489-a8ad-b5e89b18ef62] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-b4flg" [af5a7a4f-3b51-4489-a8ad-b5e89b18ef62] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.007101685s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.23s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.40s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20220511231548-7294 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.40s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-20220511231548-7294 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-tczjr" [430a389b-d084-4a60-a4ce-e120f4ee60b3] Pending
helpers_test.go:342: "netcat-668db85669-tczjr" [430a389b-d084-4a60-a4ce-e120f4ee60b3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-tczjr" [430a389b-d084-4a60-a4ce-e120f4ee60b3] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.00658047s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20220511231548-7294 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-20220511231548-7294 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-b58ds" [7a4845f2-0975-407e-88a1-046e621dcc4e] Pending

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
helpers_test.go:342: "netcat-668db85669-b58ds" [7a4845f2-0975-407e-88a1-046e621dcc4e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
helpers_test.go:342: "netcat-668db85669-b58ds" [7a4845f2-0975-407e-88a1-046e621dcc4e] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.007407918s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.48s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (39.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20220511231548-7294 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20220511231548-7294 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=docker: (39.88092548s)
--- PASS: TestNetworkPlugins/group/bridge/Start (39.88s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (39.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-20220511231548-7294 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-20220511231548-7294 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (39.968286384s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (39.97s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20220511231548-7294 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-20220511231548-7294 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-6hj65" [3a654400-7707-46bb-b104-98dcb47faf99] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0511 23:29:01.146956    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511231318-7294/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
helpers_test.go:342: "netcat-668db85669-6hj65" [3a654400-7707-46bb-b104-98dcb47faf99] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.011525979s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.21s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-20220511231548-7294 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (11.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kubenet-20220511231548-7294 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-ql4gr" [39b73e03-1fb8-4f18-b9a0-ae0ac8b9d9cb] Pending
helpers_test.go:342: "netcat-668db85669-ql4gr" [39b73e03-1fb8-4f18-b9a0-ae0ac8b9d9cb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
helpers_test.go:342: "netcat-668db85669-ql4gr" [39b73e03-1fb8-4f18-b9a0-ae0ac8b9d9cb] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.006502197s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (313.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220511233230-7294 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E0511 23:32:33.027767    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511231549-7294/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20220511233230-7294 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (5m13.451587147s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (313.45s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (54.78s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220511233503-7294 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6-rc.0
E0511 23:35:24.190663    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511231318-7294/client.crt: no such file or directory
E0511 23:35:24.800486    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511231548-7294/client.crt: no such file or directory
E0511 23:35:24.805766    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511231548-7294/client.crt: no such file or directory
E0511 23:35:24.816074    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511231548-7294/client.crt: no such file or directory
E0511 23:35:24.836383    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511231548-7294/client.crt: no such file or directory
E0511 23:35:24.876715    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511231548-7294/client.crt: no such file or directory
E0511 23:35:24.957053    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511231548-7294/client.crt: no such file or directory
E0511 23:35:25.117965    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511231548-7294/client.crt: no such file or directory
E0511 23:35:25.438300    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511231548-7294/client.crt: no such file or directory
E0511 23:35:26.079085    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511231548-7294/client.crt: no such file or directory
E0511 23:35:27.360870    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511231548-7294/client.crt: no such file or directory
E0511 23:35:29.921212    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511231548-7294/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20220511233503-7294 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6-rc.0: (54.78032606s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (54.78s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (288.89s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220511233553-7294 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5
E0511 23:35:55.931606    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511231549-7294/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20220511233553-7294 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5: (4m48.886662187s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (288.89s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context no-preload-20220511233503-7294 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) Done: kubectl --context no-preload-20220511233503-7294 create -f testdata/busybox.yaml: (1.006311856s)
start_stop_delete_test.go:198: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [a9fc2855-89af-4f71-9739-0ef19e9cfc60] Pending
E0511 23:35:59.758873    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294/client.crt: no such file or directory
helpers_test.go:342: "busybox" [a9fc2855-89af-4f71-9739-0ef19e9cfc60] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [a9fc2855-89af-4f71-9739-0ef19e9cfc60] Running
E0511 23:36:05.762820    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511231548-7294/client.crt: no such file or directory
start_stop_delete_test.go:198: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.088504749s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context no-preload-20220511233503-7294 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.72s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20220511233503-7294 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context no-preload-20220511233503-7294 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.72s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20220511233503-7294 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20220511233503-7294 --alsologtostderr -v=3: (11.005519301s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220511233503-7294 -n no-preload-20220511233503-7294
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220511233503-7294 -n no-preload-20220511233503-7294: exit status 7 (131.858864ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20220511233503-7294 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.27s)
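The `exit status 7 (may be ok)` above is expected right after a stop: `minikube status` exits with a bitmask of not-running flags rather than a single error code, so 7 simply means the whole cluster is stopped, which is exactly the state this test wants before re-enabling the addon. A sketch of decoding such a code (the flag values 1 = host, 2 = kubelet, 4 = apiserver are an assumption based on the minikube source of this era — verify against your version before relying on them):

```go
package main

import "fmt"

// Assumed minikube status exit-code flags; treat these values as
// illustrative, not authoritative.
const (
	hostNotRunning      = 1 << 0 // 1
	kubeletNotRunning   = 1 << 1 // 2
	apiserverNotRunning = 1 << 2 // 4
)

// decodeStatus reports which components an exit code flags as stopped.
func decodeStatus(code int) []string {
	if code == 0 {
		return []string{"everything running"}
	}
	var out []string
	if code&hostNotRunning != 0 {
		out = append(out, "host stopped")
	}
	if code&kubeletNotRunning != 0 {
		out = append(out, "kubelet stopped")
	}
	if code&apiserverNotRunning != 0 {
		out = append(out, "apiserver stopped")
	}
	return out
}

func main() {
	// exit status 7 = 1|2|4: the fully-stopped state the test expects,
	// hence the "(may be ok)" in the helper's log line.
	fmt.Println(decodeStatus(7))
}
```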

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (338.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220511233503-7294 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6-rc.0
E0511 23:36:38.983653    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511231548-7294/client.crt: no such file or directory
E0511 23:36:38.989467    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511231548-7294/client.crt: no such file or directory
E0511 23:36:39.000026    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511231548-7294/client.crt: no such file or directory
E0511 23:36:39.020252    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511231548-7294/client.crt: no such file or directory
E0511 23:36:39.060563    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511231548-7294/client.crt: no such file or directory
E0511 23:36:39.140889    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511231548-7294/client.crt: no such file or directory
E0511 23:36:39.301680    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511231548-7294/client.crt: no such file or directory
E0511 23:36:39.621935    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511231548-7294/client.crt: no such file or directory
E0511 23:36:40.262108    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511231548-7294/client.crt: no such file or directory
E0511 23:36:41.542632    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511231548-7294/client.crt: no such file or directory
E0511 23:36:44.103342    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511231548-7294/client.crt: no such file or directory
E0511 23:36:46.723676    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511231548-7294/client.crt: no such file or directory
E0511 23:36:49.224392    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511231548-7294/client.crt: no such file or directory
E0511 23:36:59.464580    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511231548-7294/client.crt: no such file or directory
E0511 23:37:12.546294    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511231549-7294/client.crt: no such file or directory
E0511 23:37:19.945245    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511231548-7294/client.crt: no such file or directory
E0511 23:37:40.230744    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511231549-7294/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20220511233503-7294 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6-rc.0: (5m37.712349232s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220511233503-7294 -n no-preload-20220511233503-7294
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (338.18s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context old-k8s-version-20220511233230-7294 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [d15dc26f-955e-4088-ba23-0dc2199aa0a3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [d15dc26f-955e-4088-ba23-0dc2199aa0a3] Running
start_stop_delete_test.go:198: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.013615568s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context old-k8s-version-20220511233230-7294 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.34s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.62s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20220511233230-7294 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context old-k8s-version-20220511233230-7294 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.62s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20220511233230-7294 --alsologtostderr -v=3
E0511 23:37:55.023339    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511225849-7294/client.crt: no such file or directory
E0511 23:37:56.713681    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294/client.crt: no such file or directory
E0511 23:38:00.905951    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511231548-7294/client.crt: no such file or directory
start_stop_delete_test.go:230: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20220511233230-7294 --alsologtostderr -v=3: (11.016143003s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220511233230-7294 -n old-k8s-version-20220511233230-7294
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220511233230-7294 -n old-k8s-version-20220511233230-7294: exit status 7 (121.900134ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20220511233230-7294 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (605.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220511233230-7294 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E0511 23:38:08.644764    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511231548-7294/client.crt: no such file or directory
E0511 23:38:58.027346    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511231548-7294/client.crt: no such file or directory
E0511 23:38:58.032628    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511231548-7294/client.crt: no such file or directory
E0511 23:38:58.042901    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511231548-7294/client.crt: no such file or directory
E0511 23:38:58.063269    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511231548-7294/client.crt: no such file or directory
E0511 23:38:58.103588    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511231548-7294/client.crt: no such file or directory
E0511 23:38:58.183986    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511231548-7294/client.crt: no such file or directory
E0511 23:38:58.344370    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511231548-7294/client.crt: no such file or directory
E0511 23:38:58.665350    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511231548-7294/client.crt: no such file or directory
E0511 23:38:59.305657    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511231548-7294/client.crt: no such file or directory
E0511 23:39:00.586411    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511231548-7294/client.crt: no such file or directory
E0511 23:39:01.147014    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511231318-7294/client.crt: no such file or directory
E0511 23:39:03.147140    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511231548-7294/client.crt: no such file or directory
E0511 23:39:08.267732    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511231548-7294/client.crt: no such file or directory
E0511 23:39:12.538213    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511225237-7294/client.crt: no such file or directory
E0511 23:39:14.222161    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511231549-7294/client.crt: no such file or directory
E0511 23:39:18.508859    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511231548-7294/client.crt: no such file or directory
E0511 23:39:22.826271    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511231548-7294/client.crt: no such file or directory
E0511 23:39:34.180330    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511231548-7294/client.crt: no such file or directory
E0511 23:39:34.185645    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511231548-7294/client.crt: no such file or directory
E0511 23:39:34.195904    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511231548-7294/client.crt: no such file or directory
E0511 23:39:34.216202    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511231548-7294/client.crt: no such file or directory
E0511 23:39:34.256656    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511231548-7294/client.crt: no such file or directory
E0511 23:39:34.336991    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511231548-7294/client.crt: no such file or directory
E0511 23:39:34.497382    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511231548-7294/client.crt: no such file or directory
E0511 23:39:34.817804    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511231548-7294/client.crt: no such file or directory
E0511 23:39:35.458508    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511231548-7294/client.crt: no such file or directory
E0511 23:39:36.739619    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511231548-7294/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20220511233230-7294 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (10m4.832575813s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220511233230-7294 -n old-k8s-version-20220511233230-7294
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (605.25s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (290.8s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220511233944-7294 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5
E0511 23:39:51.975772    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511225849-7294/client.crt: no such file or directory
E0511 23:39:54.661182    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511231548-7294/client.crt: no such file or directory
E0511 23:40:15.141812    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511231548-7294/client.crt: no such file or directory
E0511 23:40:19.950893    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511231548-7294/client.crt: no such file or directory
E0511 23:40:24.800672    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511231548-7294/client.crt: no such file or directory
E0511 23:40:37.266865    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511231549-7294/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20220511233944-7294 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5: (4m50.795262728s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (290.80s)

TestStartStop/group/embed-certs/serial/DeployApp (7.3s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context embed-certs-20220511233553-7294 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [273ce3b1-6b8b-437a-a260-4546057fbd12] Pending
helpers_test.go:342: "busybox" [273ce3b1-6b8b-437a-a260-4546057fbd12] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [273ce3b1-6b8b-437a-a260-4546057fbd12] Running
start_stop_delete_test.go:198: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.011835879s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context embed-certs-20220511233553-7294 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.30s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.61s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20220511233553-7294 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context embed-certs-20220511233553-7294 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.61s)

TestStartStop/group/embed-certs/serial/Stop (10.91s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20220511233553-7294 --alsologtostderr -v=3
E0511 23:40:52.485511    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511231548-7294/client.crt: no such file or directory
E0511 23:40:55.931799    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511231549-7294/client.crt: no such file or directory
E0511 23:40:56.102039    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511231548-7294/client.crt: no such file or directory
start_stop_delete_test.go:230: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20220511233553-7294 --alsologtostderr -v=3: (10.911502041s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.91s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220511233553-7294 -n embed-certs-20220511233553-7294
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220511233553-7294 -n embed-certs-20220511233553-7294: exit status 7 (105.192036ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20220511233553-7294 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (572.02s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220511233553-7294 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5
E0511 23:41:38.984050    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511231548-7294/client.crt: no such file or directory
E0511 23:41:41.871357    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511231548-7294/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20220511233553-7294 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5: (9m31.601488361s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220511233553-7294 -n embed-certs-20220511233553-7294
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (572.02s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-z5n6f" [6b7157fd-52dc-4e08-b432-c016c9c1abbb] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0511 23:42:06.667291    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511231548-7294/client.crt: no such file or directory
helpers_test.go:342: "kubernetes-dashboard-8469778f77-z5n6f" [6b7157fd-52dc-4e08-b432-c016c9c1abbb] Running
start_stop_delete_test.go:276: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.012095463s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-z5n6f" [6b7157fd-52dc-4e08-b432-c016c9c1abbb] Running
E0511 23:42:12.546959    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511231549-7294/client.crt: no such file or directory
start_stop_delete_test.go:289: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006687231s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context no-preload-20220511233503-7294 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.39s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-20220511233503-7294 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.39s)

TestStartStop/group/no-preload/serial/Pause (3.14s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-20220511233503-7294 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220511233503-7294 -n no-preload-20220511233503-7294
E0511 23:42:18.022815    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511231548-7294/client.crt: no such file or directory
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220511233503-7294 -n no-preload-20220511233503-7294: exit status 2 (403.137662ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220511233503-7294 -n no-preload-20220511233503-7294
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220511233503-7294 -n no-preload-20220511233503-7294: exit status 2 (397.078276ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-20220511233503-7294 --alsologtostderr -v=1
E0511 23:42:18.977270    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511231549-7294/client.crt: no such file or directory
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220511233503-7294 -n no-preload-20220511233503-7294
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220511233503-7294 -n no-preload-20220511233503-7294
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.14s)

TestStartStop/group/newest-cni/serial/FirstStart (38.71s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220511234223-7294 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6-rc.0
E0511 23:42:56.713338    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294/client.crt: no such file or directory
start_stop_delete_test.go:188: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220511234223-7294 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6-rc.0: (38.709109002s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (38.71s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.84s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20220511234223-7294 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.84s)

TestStartStop/group/newest-cni/serial/Stop (10.93s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20220511234223-7294 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20220511234223-7294 --alsologtostderr -v=3: (10.926940001s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.93s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220511234223-7294 -n newest-cni-20220511234223-7294
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220511234223-7294 -n newest-cni-20220511234223-7294: exit status 7 (106.530872ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20220511234223-7294 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (20.53s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220511234223-7294 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6-rc.0
start_stop_delete_test.go:258: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220511234223-7294 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6-rc.0: (20.033843833s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220511234223-7294 -n newest-cni-20220511234223-7294
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (20.53s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:286: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.47s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20220511234223-7294 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.47s)

TestStartStop/group/newest-cni/serial/Pause (3.35s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20220511234223-7294 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220511234223-7294 -n newest-cni-20220511234223-7294
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220511234223-7294 -n newest-cni-20220511234223-7294: exit status 2 (445.469061ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220511234223-7294 -n newest-cni-20220511234223-7294
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220511234223-7294 -n newest-cni-20220511234223-7294: exit status 2 (422.686125ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-20220511234223-7294 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220511234223-7294 -n newest-cni-20220511234223-7294
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220511234223-7294 -n newest-cni-20220511234223-7294
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.35s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (9.43s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context default-k8s-different-port-20220511233944-7294 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [8e8bf840-e7ec-4870-914d-d4d7e29baae1] Pending
helpers_test.go:342: "busybox" [8e8bf840-e7ec-4870-914d-d4d7e29baae1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [8e8bf840-e7ec-4870-914d-d4d7e29baae1] Running
start_stop_delete_test.go:198: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 9.012090272s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context default-k8s-different-port-20220511233944-7294 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (9.43s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.66s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20220511233944-7294 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context default-k8s-different-port-20220511233944-7294 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.66s)

TestStartStop/group/default-k8s-different-port/serial/Stop (10.83s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20220511233944-7294 --alsologtostderr -v=3
E0511 23:44:51.976065    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511225849-7294/client.crt: no such file or directory
start_stop_delete_test.go:230: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20220511233944-7294 --alsologtostderr -v=3: (10.830799583s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (10.83s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220511233944-7294 -n default-k8s-different-port-20220511233944-7294
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220511233944-7294 -n default-k8s-different-port-20220511233944-7294: exit status 7 (104.586653ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20220511233944-7294 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (570.72s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220511233944-7294 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5
E0511 23:45:01.863963    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511231548-7294/client.crt: no such file or directory
E0511 23:45:24.801202    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511231548-7294/client.crt: no such file or directory
E0511 23:45:55.931685    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511231549-7294/client.crt: no such file or directory
E0511 23:45:59.277417    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/no-preload-20220511233503-7294/client.crt: no such file or directory
E0511 23:45:59.282733    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/no-preload-20220511233503-7294/client.crt: no such file or directory
E0511 23:45:59.293018    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/no-preload-20220511233503-7294/client.crt: no such file or directory
E0511 23:45:59.313411    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/no-preload-20220511233503-7294/client.crt: no such file or directory
E0511 23:45:59.353729    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/no-preload-20220511233503-7294/client.crt: no such file or directory
E0511 23:45:59.434350    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/no-preload-20220511233503-7294/client.crt: no such file or directory
E0511 23:45:59.594775    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/no-preload-20220511233503-7294/client.crt: no such file or directory
E0511 23:45:59.915285    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/no-preload-20220511233503-7294/client.crt: no such file or directory
E0511 23:46:00.556314    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/no-preload-20220511233503-7294/client.crt: no such file or directory
E0511 23:46:01.836976    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/no-preload-20220511233503-7294/client.crt: no such file or directory
E0511 23:46:04.397837    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/no-preload-20220511233503-7294/client.crt: no such file or directory
E0511 23:46:09.518591    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/no-preload-20220511233503-7294/client.crt: no such file or directory
E0511 23:46:19.759330    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/no-preload-20220511233503-7294/client.crt: no such file or directory
E0511 23:46:38.983833    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511231548-7294/client.crt: no such file or directory
E0511 23:46:40.239952    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/no-preload-20220511233503-7294/client.crt: no such file or directory
E0511 23:47:12.546521    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511231549-7294/client.crt: no such file or directory
E0511 23:47:21.200918    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/no-preload-20220511233503-7294/client.crt: no such file or directory
E0511 23:47:56.713674    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511225632-7294/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20220511233944-7294 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5: (9m30.314363794s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220511233944-7294 -n default-k8s-different-port-20220511233944-7294
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (570.72s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-6fb5469cf5-s9k7q" [d4645c37-87b2-4787-83a7-79fa2c956c2b] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:276: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012417565s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-6fb5469cf5-s9k7q" [d4645c37-87b2-4787-83a7-79fa2c956c2b] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:289: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005895575s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context old-k8s-version-20220511233230-7294 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.4s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20220511233230-7294 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.40s)

TestStartStop/group/old-k8s-version/serial/Pause (3.16s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20220511233230-7294 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220511233230-7294 -n old-k8s-version-20220511233230-7294
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220511233230-7294 -n old-k8s-version-20220511233230-7294: exit status 2 (408.737548ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220511233230-7294 -n old-k8s-version-20220511233230-7294
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220511233230-7294 -n old-k8s-version-20220511233230-7294: exit status 2 (407.712571ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-20220511233230-7294 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220511233230-7294 -n old-k8s-version-20220511233230-7294
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220511233230-7294 -n old-k8s-version-20220511233230-7294
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.16s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-gwkr8" [62ee4359-a540-4e76-afeb-6954825c61b6] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:276: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011835716s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-gwkr8" [62ee4359-a540-4e76-afeb-6954825c61b6] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:289: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008040545s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context embed-certs-20220511233553-7294 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.4s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20220511233553-7294 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.40s)

TestStartStop/group/embed-certs/serial/Pause (3.12s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20220511233553-7294 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220511233553-7294 -n embed-certs-20220511233553-7294
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220511233553-7294 -n embed-certs-20220511233553-7294: exit status 2 (405.866738ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220511233553-7294 -n embed-certs-20220511233553-7294
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220511233553-7294 -n embed-certs-20220511233553-7294: exit status 2 (406.027326ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-20220511233553-7294 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220511233553-7294 -n embed-certs-20220511233553-7294
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220511233553-7294 -n embed-certs-20220511233553-7294
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.12s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.01s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-dc9z8" [ca7bd332-4c91-4d8c-867d-ca06f72e0ed9] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:276: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011955425s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-dc9z8" [ca7bd332-4c91-4d8c-867d-ca06f72e0ed9] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0511 23:54:34.180720    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511231548-7294/client.crt: no such file or directory
E0511 23:54:35.024378    7294 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13639-3547-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511225849-7294/client.crt: no such file or directory
start_stop_delete_test.go:289: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006792642s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context default-k8s-different-port-20220511233944-7294 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.39s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-different-port-20220511233944-7294 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.39s)

TestStartStop/group/default-k8s-different-port/serial/Pause (3.14s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-different-port-20220511233944-7294 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220511233944-7294 -n default-k8s-different-port-20220511233944-7294
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220511233944-7294 -n default-k8s-different-port-20220511233944-7294: exit status 2 (399.402159ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220511233944-7294 -n default-k8s-different-port-20220511233944-7294
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220511233944-7294 -n default-k8s-different-port-20220511233944-7294: exit status 2 (402.534401ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-different-port-20220511233944-7294 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220511233944-7294 -n default-k8s-different-port-20220511233944-7294
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220511233944-7294 -n default-k8s-different-port-20220511233944-7294
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (3.14s)

Test skip (21/283)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.23.5/cached-images (0s)
=== RUN   TestDownloadOnly/v1.23.5/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.5/cached-images (0.00s)

TestDownloadOnly/v1.23.5/binaries (0s)
=== RUN   TestDownloadOnly/v1.23.5/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.5/binaries (0.00s)

TestDownloadOnly/v1.23.5/kubectl (0s)

=== RUN   TestDownloadOnly/v1.23.5/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.23.5/kubectl (0.00s)

TestDownloadOnly/v1.23.6-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.23.6-rc.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.6-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.23.6-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.23.6-rc.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.6-rc.0/binaries (0.00s)

TestDownloadOnly/v1.23.6-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.23.6-rc.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.23.6-rc.0/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:448: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:545: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/flannel (0.49s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20220511231548-7294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p flannel-20220511231548-7294
--- SKIP: TestNetworkPlugins/group/flannel (0.49s)

TestStartStop/group/disable-driver-mounts (0.46s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:105: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20220511233944-7294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20220511233944-7294
--- SKIP: TestStartStop/group/disable-driver-mounts (0.46s)