Test Report: Docker_Linux 13251

cce8d1911280cbcb62c9a9805b43d62c56136aef:2022-02-02:22517

Failed tests (3/292)

|-------|---------------------------------------|--------------|
| Order | Failed test                           | Duration (s) |
|-------|---------------------------------------|--------------|
|    75 | TestFunctional/serial/ComponentHealth |         2.23 |
|   259 | TestNoKubernetes/serial/StartNoArgs   |         4.60 |
|   345 | TestNetworkPlugins/group/calico/Start |       524.86 |
|-------|---------------------------------------|--------------|
TestFunctional/serial/ComponentHealth (2.23s)
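The failure in the log below boils down to one check: the kube-apiserver pod reports phase `Running`, but its `Ready` condition is `False`, so the test treats the control plane as unhealthy. A minimal sketch of the equivalent readiness check on the `kubectl get po ... -o=json` output (a hypothetical helper for illustration; the real logic lives in `functional_test.go`):

```python
import json

def control_plane_ready(pod_json: str) -> bool:
    """Return True only if the pod's phase is Running AND its Ready
    condition is True. Phase alone is not sufficient: a container can be
    Running while the pod is not yet Ready, which is exactly this failure."""
    pod = json.loads(pod_json)
    if pod["status"]["phase"] != "Running":
        return False
    conditions = pod["status"].get("conditions", [])
    return any(c["type"] == "Ready" and c["status"] == "True"
               for c in conditions)

# Shape of the failing kube-apiserver pod from the log: Running, Ready=False.
apiserver = json.dumps({"status": {"phase": "Running", "conditions": [
    {"type": "Initialized", "status": "True"},
    {"type": "Ready", "status": "False"},
]}})
print(control_plane_ready(apiserver))  # False -> the test fails
```

Restart counts in the log (`RestartCount:1` with a terminated `LastTerminationState`) suggest the apiserver had just restarted after the second `minikube start` and had not passed its readiness probe yet when the test sampled it.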

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:811: (dbg) Run:  kubectl --context functional-20220202214710-386638 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:826: etcd phase: Running
functional_test.go:836: etcd status: Ready
functional_test.go:826: kube-apiserver phase: Running
functional_test.go:834: kube-apiserver is not Ready: {Phase:Running Conditions:[{Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.49.2 PodIP:192.168.49.2 StartTime:2022-02-02 21:47:36 +0000 UTC ContainerStatuses:[{Name:kube-apiserver State:{Waiting:<nil> Running:0xc0010895f0 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc000692150} Ready:false RestartCount:1 Image:k8s.gcr.io/kube-apiserver:v1.23.2 ImageID:docker-pullable://k8s.gcr.io/kube-apiserver@sha256:63ede81b7e1fbb51669f4ee461481815f50eeed1f95e48558e3b8c3dace58a0f ContainerID:docker://16f643730c8ff3e229773759904dc292039b4b0cca5a89909b2a0f64c57c469e}]}
functional_test.go:826: kube-controller-manager phase: Running
functional_test.go:836: kube-controller-manager status: Ready
functional_test.go:826: kube-scheduler phase: Running
functional_test.go:836: kube-scheduler status: Ready
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect functional-20220202214710-386638
helpers_test.go:236: (dbg) docker inspect functional-20220202214710-386638:

-- stdout --
	[
	    {
	        "Id": "25093bd177af799bcd7a944c7034989d33dd21f9b824b67856a8389eb846e95e",
	        "Created": "2022-02-02T21:47:19.785576666Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 409225,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-02-02T21:47:20.121491302Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:64d09634c60d2a75596bc705aa84bdc41f76fe47c5d9ee362550bffbdc256979",
	        "ResolvConfPath": "/var/lib/docker/containers/25093bd177af799bcd7a944c7034989d33dd21f9b824b67856a8389eb846e95e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/25093bd177af799bcd7a944c7034989d33dd21f9b824b67856a8389eb846e95e/hostname",
	        "HostsPath": "/var/lib/docker/containers/25093bd177af799bcd7a944c7034989d33dd21f9b824b67856a8389eb846e95e/hosts",
	        "LogPath": "/var/lib/docker/containers/25093bd177af799bcd7a944c7034989d33dd21f9b824b67856a8389eb846e95e/25093bd177af799bcd7a944c7034989d33dd21f9b824b67856a8389eb846e95e-json.log",
	        "Name": "/functional-20220202214710-386638",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-20220202214710-386638:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-20220202214710-386638",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/eaf65fc43467a6bb44f91633dbf8c411f563d71ea0df9176fa4f414f0f82276c-init/diff:/var/lib/docker/overlay2/d4663ead96d8fac7b028dde763b7445fdff56593784b7a04a0c4c7450b12ac8a/diff:/var/lib/docker/overlay2/f0c766a8d59c3075c44f5eaf54a88aef49ac3a770e6f1e3ac6ebd4004f5b70e2/diff:/var/lib/docker/overlay2/03f8cecf4339603da26d55f367194130430755e9c21844c70ce3d30bd8d5d776/diff:/var/lib/docker/overlay2/8c56519bb2995287e5231612d5ca3809d3ca82a08d3fb88d4bc3f28acb44c548/diff:/var/lib/docker/overlay2/cfdceedf4766b92de092fc07ad1b8f1f378126b680f71754acd32502d082ac4c/diff:/var/lib/docker/overlay2/243a3de48d24643b038a407d872fc1ebb1cca9719c882859b2e65a71ba051c3d/diff:/var/lib/docker/overlay2/73d9c1c910d6be4b6f1719f9ad777c50baedc16cb66f6167f61c34c3535d6aa8/diff:/var/lib/docker/overlay2/414a2e06f368b9a6893993643cc902550952ea16b431d03ef81177a67bcc6055/diff:/var/lib/docker/overlay2/237cb26dc1fb33617d977694b7036e843ca7076f249a87cedb9719f8bb852369/diff:/var/lib/docker/overlay2/f94a6739f2f53cb0adbb52fccb7daf09ff60a575e9ef35eedbfae9a115cb0bee/diff:/var/lib/docker/overlay2/1a7b8bc08aeb75e64990bf84e55e245d3ccba13a7248844f2a2b41a179987edd/diff:/var/lib/docker/overlay2/9d6fe9ebc7ebbd17697484e59d73c2e56a57b9efd010b504af3e94f33693a302/diff:/var/lib/docker/overlay2/a6b04596431127c96091ac1a60b24c2efd5dc5925d3a6be2c7c991a40f0fba61/diff:/var/lib/docker/overlay2/ddffede76ffd319874db8f340cf399929a918323513065094964ebc981ccebe6/diff:/var/lib/docker/overlay2/873af33e16ed022cdbff8f367fac5f511da2edbe653c3a4df4b38f17018fde26/diff:/var/lib/docker/overlay2/49ecfae1413a927bd924c5c004499b9af18da6c25beffa6da10506397419e246/diff:/var/lib/docker/overlay2/8663e1a8bea6b4285860191688fcf3d3aa95f958547b7d2918feda19facc72d2/diff:/var/lib/docker/overlay2/96864535f6abf106236521f0aa4d98958c91533ecc34864088813a5d638d7a85/diff:/var/lib/docker/overlay2/3245e931c6f0447b1c6dd192323b06a5580c4cb9c80e63e19c886107effec1a8/diff:/var/lib/docker/overlay2/fbfc10643f3968343b6f304ba573ab22123857f0ac7bbdf796e69cc759ffcb01/diff:/var/lib/docker/overlay2/008c499b0a1d502f449e9408eb9d7f0d3fd1f927c6fed14c2daf9128f2481a2e/diff:/var/lib/docker/overlay2/049cceba63d6fda39cec7c7c348ae0046d3bcfc9a354ef8c20d2cb0da0c6126e/diff:/var/lib/docker/overlay2/7423dec7519f3618cdbd580c816a41a86bffe6f544fe6e6c90b0891ab319effe/diff:/var/lib/docker/overlay2/b78015fe190a7617cff46e43a3b7a90d608036971a3c758aab0d4c814064c775/diff:/var/lib/docker/overlay2/f1c7b371c8afb5d9df1ad0b6bcf5860b7d0931bc04f95f00c2f7dc66076996d6/diff:/var/lib/docker/overlay2/68d4abf197eeabb5c097584a1527cd6993fb2d55b0fac9957ec46f8412efdf06/diff:/var/lib/docker/overlay2/f08b8daa4f1c25becadfdae5150584a3dd3ac3bf46afaa6e101fe8e0823572f4/diff:/var/lib/docker/overlay2/1965ab77a969620854fa7e23a0c745af7766a48e9ec2abacecc3e064d1c8fa6a/diff:/var/lib/docker/overlay2/e7cbe6b577242fb8b973317eaa8ee217a8a9ee355b88362b66d45d718b3b2c4b/diff:/var/lib/docker/overlay2/c59e06d8f5c93ed9cb94a83e137c16f3dcdea80b9dceccba323b6ddc7543de46/diff:/var/lib/docker/overlay2/d2e3ed906400776c06ca0502e30b187ca7e8cafdf00da3a54c16cd3818f76bbc/diff:/var/lib/docker/overlay2/8751d7f7a388ed73174c9365d765716ea6d4d513683a025fe6e322b37e0ffa17/diff:/var/lib/docker/overlay2/e19c84986e7254f1600c2a35898ef2158b4e5b77f2ce8cdf017c2f326ffc0491/diff:/var/lib/docker/overlay2/3dc4411ebe2379955bd8260b29d8faa36b7e965e38b15b19cc65ad0a63e431f6/diff:/var/lib/docker/overlay2/2cae1638c524a830e44f0cb4b8db0e6063415a57346d1d190e50edea3c78df73/diff:/var/lib/docker/overlay2/9c15e8e15ab0ee2a47827fef4273bd0d4ffc315726879f2f422a01be6116fcb2/diff:/var/lib/docker/overlay2/d39456e34bd05af837a974416337cc6b9f6ea243f25e9033212a340da93d3154/diff:/var/lib/docker/overlay2/c0101867e0d0e0ff5aaf7104e95cb6cab78625c9cd8697d2a4f28fff809159ff/diff:/var/lib/docker/overlay2/f1c53d89ed6960deaee63188b5bffd5f88edaf3546c4312205f3b465f7bca9b5/diff:/var/lib/docker/overlay2/2685ce865e736b98fc7e2e1447bdbd580080188c81a14278cf54b8e8dedbf1d9/diff:/var/lib/docker/overlay2/985637e295ac0794f3d93fd241c0526bb5ac4c727f5680fc30c1ed3dde3598ae/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394cfc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eaf65fc43467a6bb44f91633dbf8c411f563d71ea0df9176fa4f414f0f82276c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eaf65fc43467a6bb44f91633dbf8c411f563d71ea0df9176fa4f414f0f82276c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eaf65fc43467a6bb44f91633dbf8c411f563d71ea0df9176fa4f414f0f82276c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-20220202214710-386638",
	                "Source": "/var/lib/docker/volumes/functional-20220202214710-386638/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-20220202214710-386638",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-20220202214710-386638",
	                "name.minikube.sigs.k8s.io": "functional-20220202214710-386638",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4522876337a0f7d10e58eac6e758ba0a0ac8e0013dd66fdfbfa52bc414c0570e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49242"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49241"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49238"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49240"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49239"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4522876337a0",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-20220202214710-386638": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "25093bd177af",
	                        "functional-20220202214710-386638"
	                    ],
	                    "NetworkID": "47c670682efd35dcbb7bca5b9ce18a4524cd9be2130ea6e48bfdbaa04a12cca5",
	                    "EndpointID": "c32dd31a6be38be19e1d5e925b737e59c249b4f7b22560ad209c592c2545e015",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-20220202214710-386638 -n functional-20220202214710-386638
helpers_test.go:245: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p functional-20220202214710-386638 logs -n 25: (1.255525527s)
helpers_test.go:253: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------------------|----------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                                   Args                                   |             Profile              |  User   | Version |          Start Time           |           End Time            |
	|---------|--------------------------------------------------------------------------|----------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | nospam-20220202214626-386638                                             | nospam-20220202214626-386638     | jenkins | v1.25.1 | Wed, 02 Feb 2022 21:46:56 UTC | Wed, 02 Feb 2022 21:46:56 UTC |
	|         | --log_dir                                                                |                                  |         |         |                               |                               |
	|         | /tmp/nospam-20220202214626-386638                                        |                                  |         |         |                               |                               |
	|         | unpause                                                                  |                                  |         |         |                               |                               |
	| -p      | nospam-20220202214626-386638                                             | nospam-20220202214626-386638     | jenkins | v1.25.1 | Wed, 02 Feb 2022 21:46:56 UTC | Wed, 02 Feb 2022 21:46:57 UTC |
	|         | --log_dir                                                                |                                  |         |         |                               |                               |
	|         | /tmp/nospam-20220202214626-386638                                        |                                  |         |         |                               |                               |
	|         | unpause                                                                  |                                  |         |         |                               |                               |
	| -p      | nospam-20220202214626-386638                                             | nospam-20220202214626-386638     | jenkins | v1.25.1 | Wed, 02 Feb 2022 21:46:57 UTC | Wed, 02 Feb 2022 21:46:57 UTC |
	|         | --log_dir                                                                |                                  |         |         |                               |                               |
	|         | /tmp/nospam-20220202214626-386638                                        |                                  |         |         |                               |                               |
	|         | unpause                                                                  |                                  |         |         |                               |                               |
	| -p      | nospam-20220202214626-386638                                             | nospam-20220202214626-386638     | jenkins | v1.25.1 | Wed, 02 Feb 2022 21:46:57 UTC | Wed, 02 Feb 2022 21:47:08 UTC |
	|         | --log_dir                                                                |                                  |         |         |                               |                               |
	|         | /tmp/nospam-20220202214626-386638                                        |                                  |         |         |                               |                               |
	|         | stop                                                                     |                                  |         |         |                               |                               |
	| -p      | nospam-20220202214626-386638                                             | nospam-20220202214626-386638     | jenkins | v1.25.1 | Wed, 02 Feb 2022 21:47:08 UTC | Wed, 02 Feb 2022 21:47:08 UTC |
	|         | --log_dir                                                                |                                  |         |         |                               |                               |
	|         | /tmp/nospam-20220202214626-386638                                        |                                  |         |         |                               |                               |
	|         | stop                                                                     |                                  |         |         |                               |                               |
	| -p      | nospam-20220202214626-386638                                             | nospam-20220202214626-386638     | jenkins | v1.25.1 | Wed, 02 Feb 2022 21:47:08 UTC | Wed, 02 Feb 2022 21:47:08 UTC |
	|         | --log_dir                                                                |                                  |         |         |                               |                               |
	|         | /tmp/nospam-20220202214626-386638                                        |                                  |         |         |                               |                               |
	|         | stop                                                                     |                                  |         |         |                               |                               |
	| delete  | -p                                                                       | nospam-20220202214626-386638     | jenkins | v1.25.1 | Wed, 02 Feb 2022 21:47:08 UTC | Wed, 02 Feb 2022 21:47:10 UTC |
	|         | nospam-20220202214626-386638                                             |                                  |         |         |                               |                               |
	| start   | -p                                                                       | functional-20220202214710-386638 | jenkins | v1.25.1 | Wed, 02 Feb 2022 21:47:10 UTC | Wed, 02 Feb 2022 21:47:53 UTC |
	|         | functional-20220202214710-386638                                         |                                  |         |         |                               |                               |
	|         | --memory=4000                                                            |                                  |         |         |                               |                               |
	|         | --apiserver-port=8441                                                    |                                  |         |         |                               |                               |
	|         | --wait=all --driver=docker                                               |                                  |         |         |                               |                               |
	|         | --container-runtime=docker                                               |                                  |         |         |                               |                               |
	| start   | -p                                                                       | functional-20220202214710-386638 | jenkins | v1.25.1 | Wed, 02 Feb 2022 21:47:53 UTC | Wed, 02 Feb 2022 21:47:58 UTC |
	|         | functional-20220202214710-386638                                         |                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=8                                                   |                                  |         |         |                               |                               |
	| -p      | functional-20220202214710-386638                                         | functional-20220202214710-386638 | jenkins | v1.25.1 | Wed, 02 Feb 2022 21:47:58 UTC | Wed, 02 Feb 2022 21:47:59 UTC |
	|         | cache add k8s.gcr.io/pause:3.1                                           |                                  |         |         |                               |                               |
	| -p      | functional-20220202214710-386638                                         | functional-20220202214710-386638 | jenkins | v1.25.1 | Wed, 02 Feb 2022 21:47:59 UTC | Wed, 02 Feb 2022 21:48:00 UTC |
	|         | cache add k8s.gcr.io/pause:3.3                                           |                                  |         |         |                               |                               |
	| -p      | functional-20220202214710-386638                                         | functional-20220202214710-386638 | jenkins | v1.25.1 | Wed, 02 Feb 2022 21:48:00 UTC | Wed, 02 Feb 2022 21:48:01 UTC |
	|         | cache add                                                                |                                  |         |         |                               |                               |
	|         | k8s.gcr.io/pause:latest                                                  |                                  |         |         |                               |                               |
	| -p      | functional-20220202214710-386638 cache add                               | functional-20220202214710-386638 | jenkins | v1.25.1 | Wed, 02 Feb 2022 21:48:01 UTC | Wed, 02 Feb 2022 21:48:02 UTC |
	|         | minikube-local-cache-test:functional-20220202214710-386638               |                                  |         |         |                               |                               |
	| -p      | functional-20220202214710-386638 cache delete                            | functional-20220202214710-386638 | jenkins | v1.25.1 | Wed, 02 Feb 2022 21:48:02 UTC | Wed, 02 Feb 2022 21:48:02 UTC |
	|         | minikube-local-cache-test:functional-20220202214710-386638               |                                  |         |         |                               |                               |
	| cache   | delete k8s.gcr.io/pause:3.3                                              | minikube                         | jenkins | v1.25.1 | Wed, 02 Feb 2022 21:48:02 UTC | Wed, 02 Feb 2022 21:48:02 UTC |
	| cache   | list                                                                     | minikube                         | jenkins | v1.25.1 | Wed, 02 Feb 2022 21:48:03 UTC | Wed, 02 Feb 2022 21:48:03 UTC |
	| -p      | functional-20220202214710-386638                                         | functional-20220202214710-386638 | jenkins | v1.25.1 | Wed, 02 Feb 2022 21:48:03 UTC | Wed, 02 Feb 2022 21:48:03 UTC |
	|         | ssh sudo crictl images                                                   |                                  |         |         |                               |                               |
	| -p      | functional-20220202214710-386638                                         | functional-20220202214710-386638 | jenkins | v1.25.1 | Wed, 02 Feb 2022 21:48:03 UTC | Wed, 02 Feb 2022 21:48:03 UTC |
	|         | ssh sudo docker rmi                                                      |                                  |         |         |                               |                               |
	|         | k8s.gcr.io/pause:latest                                                  |                                  |         |         |                               |                               |
	| -p      | functional-20220202214710-386638                                         | functional-20220202214710-386638 | jenkins | v1.25.1 | Wed, 02 Feb 2022 21:48:04 UTC | Wed, 02 Feb 2022 21:48:04 UTC |
	|         | cache reload                                                             |                                  |         |         |                               |                               |
	| -p      | functional-20220202214710-386638                                         | functional-20220202214710-386638 | jenkins | v1.25.1 | Wed, 02 Feb 2022 21:48:04 UTC | Wed, 02 Feb 2022 21:48:05 UTC |
	|         | ssh sudo crictl inspecti                                                 |                                  |         |         |                               |                               |
	|         | k8s.gcr.io/pause:latest                                                  |                                  |         |         |                               |                               |
	| cache   | delete k8s.gcr.io/pause:3.1                                              | minikube                         | jenkins | v1.25.1 | Wed, 02 Feb 2022 21:48:05 UTC | Wed, 02 Feb 2022 21:48:05 UTC |
	| cache   | delete k8s.gcr.io/pause:latest                                           | minikube                         | jenkins | v1.25.1 | Wed, 02 Feb 2022 21:48:05 UTC | Wed, 02 Feb 2022 21:48:05 UTC |
	| -p      | functional-20220202214710-386638                                         | functional-20220202214710-386638 | jenkins | v1.25.1 | Wed, 02 Feb 2022 21:48:05 UTC | Wed, 02 Feb 2022 21:48:05 UTC |
	|         | kubectl -- --context                                                     |                                  |         |         |                               |                               |
	|         | functional-20220202214710-386638                                         |                                  |         |         |                               |                               |
	|         | get pods                                                                 |                                  |         |         |                               |                               |
	| kubectl | --profile=functional-20220202214710-386638                               | functional-20220202214710-386638 | jenkins | v1.25.1 | Wed, 02 Feb 2022 21:48:05 UTC | Wed, 02 Feb 2022 21:48:05 UTC |
	|         | -- --context                                                             |                                  |         |         |                               |                               |
	|         | functional-20220202214710-386638 get pods                                |                                  |         |         |                               |                               |
	| start   | -p functional-20220202214710-386638                                      | functional-20220202214710-386638 | jenkins | v1.25.1 | Wed, 02 Feb 2022 21:48:05 UTC | Wed, 02 Feb 2022 21:48:30 UTC |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                                  |         |         |                               |                               |
	|         | --wait=all                                                               |                                  |         |         |                               |                               |
	|---------|--------------------------------------------------------------------------|----------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/02/02 21:48:05
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.17.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0202 21:48:05.593079  414960 out.go:297] Setting OutFile to fd 1 ...
	I0202 21:48:05.593139  414960 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0202 21:48:05.593143  414960 out.go:310] Setting ErrFile to fd 2...
	I0202 21:48:05.593147  414960 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0202 21:48:05.593289  414960 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/bin
	I0202 21:48:05.593511  414960 out.go:304] Setting JSON to false
	I0202 21:48:05.594732  414960 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":19838,"bootTime":1643818648,"procs":505,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0202 21:48:05.594791  414960 start.go:122] virtualization: kvm guest
	I0202 21:48:05.597348  414960 out.go:176] * [functional-20220202214710-386638] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	I0202 21:48:05.599046  414960 out.go:176]   - MINIKUBE_LOCATION=13251
	I0202 21:48:05.597525  414960 notify.go:174] Checking for updates...
	I0202 21:48:05.600544  414960 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0202 21:48:05.602000  414960 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
	I0202 21:48:05.603390  414960 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube
	I0202 21:48:05.604861  414960 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0202 21:48:05.605326  414960 config.go:176] Loaded profile config "functional-20220202214710-386638": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0202 21:48:05.605390  414960 driver.go:344] Setting default libvirt URI to qemu:///system
	I0202 21:48:05.645724  414960 docker.go:132] docker version: linux-20.10.12
	I0202 21:48:05.645837  414960 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0202 21:48:05.731456  414960 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:73 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:39 SystemTime:2022-02-02 21:48:05.675684236 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0202 21:48:05.731550  414960 docker.go:237] overlay module found
	I0202 21:48:05.733681  414960 out.go:176] * Using the docker driver based on existing profile
	I0202 21:48:05.733703  414960 start.go:281] selected driver: docker
	I0202 21:48:05.733708  414960 start.go:798] validating driver "docker" against &{Name:functional-20220202214710-386638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:functional-20220202214710-386638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0202 21:48:05.733817  414960 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0202 21:48:05.733983  414960 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0202 21:48:05.818349  414960 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:73 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:39 SystemTime:2022-02-02 21:48:05.762280792 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0202 21:48:05.819129  414960 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	I0202 21:48:05.819156  414960 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0202 21:48:05.819176  414960 cni.go:93] Creating CNI manager for ""
	I0202 21:48:05.819184  414960 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0202 21:48:05.819191  414960 start_flags.go:302] config:
	{Name:functional-20220202214710-386638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:functional-20220202214710-386638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0202 21:48:05.821797  414960 out.go:176] * Starting control plane node functional-20220202214710-386638 in cluster functional-20220202214710-386638
	I0202 21:48:05.821833  414960 cache.go:120] Beginning downloading kic base image for docker with docker
	I0202 21:48:05.823536  414960 out.go:176] * Pulling base image ...
	I0202 21:48:05.823557  414960 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0202 21:48:05.823584  414960 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4
	I0202 21:48:05.823589  414960 cache.go:57] Caching tarball of preloaded images
	I0202 21:48:05.823644  414960 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I0202 21:48:05.823783  414960 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0202 21:48:05.823792  414960 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.2 on docker
	I0202 21:48:05.823915  414960 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202214710-386638/config.json ...
	I0202 21:48:05.866788  414960 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I0202 21:48:05.866801  414960 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I0202 21:48:05.866816  414960 cache.go:208] Successfully downloaded all kic artifacts
	I0202 21:48:05.866842  414960 start.go:313] acquiring machines lock for functional-20220202214710-386638: {Name:mk465eb77dcfd76fe2db25b4ec9abb51fe719307 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0202 21:48:05.866925  414960 start.go:317] acquired machines lock for "functional-20220202214710-386638" in 66.807µs
	I0202 21:48:05.866938  414960 start.go:93] Skipping create...Using existing machine configuration
	I0202 21:48:05.866942  414960 fix.go:55] fixHost starting: 
	I0202 21:48:05.867176  414960 cli_runner.go:133] Run: docker container inspect functional-20220202214710-386638 --format={{.State.Status}}
	I0202 21:48:05.896339  414960 fix.go:108] recreateIfNeeded on functional-20220202214710-386638: state=Running err=<nil>
	W0202 21:48:05.896362  414960 fix.go:134] unexpected machine state, will restart: <nil>
	I0202 21:48:05.898777  414960 out.go:176] * Updating the running docker "functional-20220202214710-386638" container ...
	I0202 21:48:05.898805  414960 machine.go:88] provisioning docker machine ...
	I0202 21:48:05.898828  414960 ubuntu.go:169] provisioning hostname "functional-20220202214710-386638"
	I0202 21:48:05.898878  414960 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220202214710-386638
	I0202 21:48:05.929902  414960 main.go:130] libmachine: Using SSH client type: native
	I0202 21:48:05.930062  414960 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil>  [] 0s} 127.0.0.1 49242 <nil> <nil>}
	I0202 21:48:05.930071  414960 main.go:130] libmachine: About to run SSH command:
	sudo hostname functional-20220202214710-386638 && echo "functional-20220202214710-386638" | sudo tee /etc/hostname
	I0202 21:48:06.066520  414960 main.go:130] libmachine: SSH cmd err, output: <nil>: functional-20220202214710-386638
	
	I0202 21:48:06.066595  414960 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220202214710-386638
	I0202 21:48:06.098943  414960 main.go:130] libmachine: Using SSH client type: native
	I0202 21:48:06.099092  414960 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil>  [] 0s} 127.0.0.1 49242 <nil> <nil>}
	I0202 21:48:06.099109  414960 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-20220202214710-386638' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-20220202214710-386638/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-20220202214710-386638' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0202 21:48:06.230122  414960 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0202 21:48:06.230145  414960 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube}
	I0202 21:48:06.230180  414960 ubuntu.go:177] setting up certificates
	I0202 21:48:06.230189  414960 provision.go:83] configureAuth start
	I0202 21:48:06.230232  414960 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-20220202214710-386638
	I0202 21:48:06.261938  414960 provision.go:138] copyHostCerts
	I0202 21:48:06.261981  414960 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cert.pem, removing ...
	I0202 21:48:06.261987  414960 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cert.pem
	I0202 21:48:06.262041  414960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cert.pem (1123 bytes)
	I0202 21:48:06.262124  414960 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/key.pem, removing ...
	I0202 21:48:06.262129  414960 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/key.pem
	I0202 21:48:06.262150  414960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/key.pem (1679 bytes)
	I0202 21:48:06.262200  414960 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.pem, removing ...
	I0202 21:48:06.262202  414960 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.pem
	I0202 21:48:06.262218  414960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.pem (1078 bytes)
	I0202 21:48:06.262254  414960 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca-key.pem org=jenkins.functional-20220202214710-386638 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-20220202214710-386638]
	I0202 21:48:06.412661  414960 provision.go:172] copyRemoteCerts
	I0202 21:48:06.412718  414960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0202 21:48:06.412749  414960 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220202214710-386638
	I0202 21:48:06.444476  414960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49242 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/functional-20220202214710-386638/id_rsa Username:docker}
	I0202 21:48:06.537500  414960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0202 21:48:06.554055  414960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0202 21:48:06.570256  414960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0202 21:48:06.586340  414960 provision.go:86] duration metric: configureAuth took 356.142367ms
	I0202 21:48:06.586353  414960 ubuntu.go:193] setting minikube options for container-runtime
	I0202 21:48:06.586526  414960 config.go:176] Loaded profile config "functional-20220202214710-386638": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0202 21:48:06.586562  414960 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220202214710-386638
	I0202 21:48:06.618949  414960 main.go:130] libmachine: Using SSH client type: native
	I0202 21:48:06.619097  414960 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil>  [] 0s} 127.0.0.1 49242 <nil> <nil>}
	I0202 21:48:06.619103  414960 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0202 21:48:06.750489  414960 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0202 21:48:06.750507  414960 ubuntu.go:71] root file system type: overlay
	I0202 21:48:06.750769  414960 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0202 21:48:06.750830  414960 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220202214710-386638
	I0202 21:48:06.783186  414960 main.go:130] libmachine: Using SSH client type: native
	I0202 21:48:06.783333  414960 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil>  [] 0s} 127.0.0.1 49242 <nil> <nil>}
	I0202 21:48:06.783419  414960 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0202 21:48:06.923419  414960 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0202 21:48:06.923493  414960 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220202214710-386638
	I0202 21:48:06.954770  414960 main.go:130] libmachine: Using SSH client type: native
	I0202 21:48:06.954922  414960 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil>  [] 0s} 127.0.0.1 49242 <nil> <nil>}
	I0202 21:48:06.954933  414960 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0202 21:48:07.089662  414960 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0202 21:48:07.089680  414960 machine.go:91] provisioned docker machine in 1.190868649s
	I0202 21:48:07.089689  414960 start.go:267] post-start starting for "functional-20220202214710-386638" (driver="docker")
	I0202 21:48:07.089694  414960 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0202 21:48:07.089737  414960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0202 21:48:07.089765  414960 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220202214710-386638
	I0202 21:48:07.121378  414960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49242 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/functional-20220202214710-386638/id_rsa Username:docker}
	I0202 21:48:07.213602  414960 ssh_runner.go:195] Run: cat /etc/os-release
	I0202 21:48:07.216117  414960 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0202 21:48:07.216130  414960 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0202 21:48:07.216136  414960 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0202 21:48:07.216146  414960 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0202 21:48:07.216153  414960 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/addons for local assets ...
	I0202 21:48:07.216194  414960 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files for local assets ...
	I0202 21:48:07.216248  414960 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/3866382.pem -> 3866382.pem in /etc/ssl/certs
	I0202 21:48:07.216300  414960 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/test/nested/copy/386638/hosts -> hosts in /etc/test/nested/copy/386638
	I0202 21:48:07.216327  414960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/386638
	I0202 21:48:07.222665  414960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/3866382.pem --> /etc/ssl/certs/3866382.pem (1708 bytes)
	I0202 21:48:07.238804  414960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/test/nested/copy/386638/hosts --> /etc/test/nested/copy/386638/hosts (40 bytes)
	I0202 21:48:07.255394  414960 start.go:270] post-start completed in 165.695279ms
	I0202 21:48:07.255433  414960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0202 21:48:07.255460  414960 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220202214710-386638
	I0202 21:48:07.286998  414960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49242 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/functional-20220202214710-386638/id_rsa Username:docker}
	I0202 21:48:07.378839  414960 fix.go:57] fixHost completed within 1.511887666s
	I0202 21:48:07.378857  414960 start.go:80] releasing machines lock for "functional-20220202214710-386638", held for 1.511925277s
	I0202 21:48:07.378948  414960 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-20220202214710-386638
	I0202 21:48:07.410807  414960 ssh_runner.go:195] Run: systemctl --version
	I0202 21:48:07.410850  414960 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220202214710-386638
	I0202 21:48:07.410875  414960 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0202 21:48:07.410916  414960 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220202214710-386638
	I0202 21:48:07.442426  414960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49242 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/functional-20220202214710-386638/id_rsa Username:docker}
	I0202 21:48:07.442888  414960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49242 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/functional-20220202214710-386638/id_rsa Username:docker}
	I0202 21:48:07.549185  414960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0202 21:48:07.558156  414960 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0202 21:48:07.567527  414960 cruntime.go:272] skipping containerd shutdown because we are bound to it
	I0202 21:48:07.567590  414960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0202 21:48:07.576459  414960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0202 21:48:07.588411  414960 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0202 21:48:07.685101  414960 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0202 21:48:07.780341  414960 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0202 21:48:07.789211  414960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0202 21:48:07.882057  414960 ssh_runner.go:195] Run: sudo systemctl start docker
	I0202 21:48:07.891086  414960 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0202 21:48:07.929145  414960 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0202 21:48:07.970218  414960 out.go:203] * Preparing Kubernetes v1.23.2 on Docker 20.10.12 ...
	I0202 21:48:07.970299  414960 cli_runner.go:133] Run: docker network inspect functional-20220202214710-386638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0202 21:48:08.000866  414960 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0202 21:48:08.006181  414960 out.go:176]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0202 21:48:08.007710  414960 out.go:176]   - kubelet.housekeeping-interval=5m
	I0202 21:48:08.007767  414960 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0202 21:48:08.007811  414960 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0202 21:48:08.039587  414960 docker.go:606] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-20220202214710-386638
	k8s.gcr.io/kube-apiserver:v1.23.2
	k8s.gcr.io/kube-controller-manager:v1.23.2
	k8s.gcr.io/kube-scheduler:v1.23.2
	k8s.gcr.io/kube-proxy:v1.23.2
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.3
	k8s.gcr.io/pause:3.1
	k8s.gcr.io/pause:latest
	
	-- /stdout --
	I0202 21:48:08.039609  414960 docker.go:537] Images already preloaded, skipping extraction
	I0202 21:48:08.039652  414960 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0202 21:48:08.071045  414960 docker.go:606] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-20220202214710-386638
	k8s.gcr.io/kube-apiserver:v1.23.2
	k8s.gcr.io/kube-scheduler:v1.23.2
	k8s.gcr.io/kube-controller-manager:v1.23.2
	k8s.gcr.io/kube-proxy:v1.23.2
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.3
	k8s.gcr.io/pause:3.1
	k8s.gcr.io/pause:latest
	
	-- /stdout --
	I0202 21:48:08.071060  414960 cache_images.go:84] Images are preloaded, skipping loading
	I0202 21:48:08.071108  414960 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0202 21:48:08.148485  414960 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0202 21:48:08.148521  414960 cni.go:93] Creating CNI manager for ""
	I0202 21:48:08.148557  414960 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0202 21:48:08.148566  414960 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0202 21:48:08.148577  414960 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.23.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-20220202214710-386638 NodeName:functional-20220202214710-386638 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0202 21:48:08.148683  414960 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "functional-20220202214710-386638"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0202 21:48:08.148742  414960 kubeadm.go:931] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=functional-20220202214710-386638 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.2 ClusterName:functional-20220202214710-386638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I0202 21:48:08.148785  414960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.2
	I0202 21:48:08.155715  414960 binaries.go:44] Found k8s binaries, skipping transfer
	I0202 21:48:08.155761  414960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0202 21:48:08.162029  414960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (385 bytes)
	I0202 21:48:08.173687  414960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0202 21:48:08.185172  414960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1904 bytes)
	I0202 21:48:08.197017  414960 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0202 21:48:08.199741  414960 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202214710-386638 for IP: 192.168.49.2
	I0202 21:48:08.199834  414960 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.key
	I0202 21:48:08.199880  414960 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/proxy-client-ca.key
	I0202 21:48:08.199957  414960 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202214710-386638/client.key
	I0202 21:48:08.200012  414960 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202214710-386638/apiserver.key.dd3b5fb2
	I0202 21:48:08.200051  414960 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202214710-386638/proxy-client.key
	I0202 21:48:08.200166  414960 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/386638.pem (1338 bytes)
	W0202 21:48:08.200196  414960 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/386638_empty.pem, impossibly tiny 0 bytes
	I0202 21:48:08.200203  414960 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca-key.pem (1679 bytes)
	I0202 21:48:08.200231  414960 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem (1078 bytes)
	I0202 21:48:08.200258  414960 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/cert.pem (1123 bytes)
	I0202 21:48:08.200281  414960 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/key.pem (1679 bytes)
	I0202 21:48:08.200321  414960 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/3866382.pem (1708 bytes)
	I0202 21:48:08.201567  414960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202214710-386638/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0202 21:48:08.218368  414960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202214710-386638/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0202 21:48:08.234058  414960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202214710-386638/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0202 21:48:08.250001  414960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202214710-386638/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0202 21:48:08.265999  414960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0202 21:48:08.282337  414960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0202 21:48:08.298533  414960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0202 21:48:08.314790  414960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0202 21:48:08.331486  414960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/386638.pem --> /usr/share/ca-certificates/386638.pem (1338 bytes)
	I0202 21:48:08.347778  414960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/3866382.pem --> /usr/share/ca-certificates/3866382.pem (1708 bytes)
	I0202 21:48:08.363730  414960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0202 21:48:08.379636  414960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0202 21:48:08.391040  414960 ssh_runner.go:195] Run: openssl version
	I0202 21:48:08.395393  414960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/386638.pem && ln -fs /usr/share/ca-certificates/386638.pem /etc/ssl/certs/386638.pem"
	I0202 21:48:08.402124  414960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/386638.pem
	I0202 21:48:08.404805  414960 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb  2 21:47 /usr/share/ca-certificates/386638.pem
	I0202 21:48:08.404841  414960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/386638.pem
	I0202 21:48:08.409157  414960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/386638.pem /etc/ssl/certs/51391683.0"
	I0202 21:48:08.415335  414960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3866382.pem && ln -fs /usr/share/ca-certificates/3866382.pem /etc/ssl/certs/3866382.pem"
	I0202 21:48:08.422200  414960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3866382.pem
	I0202 21:48:08.424890  414960 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb  2 21:47 /usr/share/ca-certificates/3866382.pem
	I0202 21:48:08.424929  414960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3866382.pem
	I0202 21:48:08.429506  414960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3866382.pem /etc/ssl/certs/3ec20f2e.0"
	I0202 21:48:08.435628  414960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0202 21:48:08.442233  414960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0202 21:48:08.444977  414960 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb  2 21:42 /usr/share/ca-certificates/minikubeCA.pem
	I0202 21:48:08.445014  414960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0202 21:48:08.449278  414960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0202 21:48:08.455221  414960 kubeadm.go:390] StartCluster: {Name:functional-20220202214710-386638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:functional-20220202214710-386638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0202 21:48:08.455328  414960 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0202 21:48:08.485233  414960 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0202 21:48:08.491749  414960 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0202 21:48:08.491759  414960 kubeadm.go:600] restartCluster start
	I0202 21:48:08.491791  414960 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0202 21:48:08.497583  414960 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0202 21:48:08.498282  414960 kubeconfig.go:92] found "functional-20220202214710-386638" server: "https://192.168.49.2:8441"
	I0202 21:48:08.500966  414960 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0202 21:48:08.507512  414960 kubeadm.go:568] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2022-02-02 21:47:24.783914847 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2022-02-02 21:48:08.192349744 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
	I0202 21:48:08.507519  414960 kubeadm.go:1054] stopping kube-system containers ...
	I0202 21:48:08.507552  414960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0202 21:48:08.540256  414960 docker.go:438] Stopping containers: [814d5d39e5e7 721285b959af 57c890a81600 984b0595435d 8d4ef263a0de 4340f72c87f1 69b3a7dc1d7c 01a90a21bd58 a93f4106bdd4 7f57939fd732 66081e6ef128 3935df79b1bf 12dd19e847f6 45922d9a0372 94d0ebbbbe55]
	I0202 21:48:08.540309  414960 ssh_runner.go:195] Run: docker stop 814d5d39e5e7 721285b959af 57c890a81600 984b0595435d 8d4ef263a0de 4340f72c87f1 69b3a7dc1d7c 01a90a21bd58 a93f4106bdd4 7f57939fd732 66081e6ef128 3935df79b1bf 12dd19e847f6 45922d9a0372 94d0ebbbbe55
	I0202 21:48:13.614958  414960 ssh_runner.go:235] Completed: docker stop 814d5d39e5e7 721285b959af 57c890a81600 984b0595435d 8d4ef263a0de 4340f72c87f1 69b3a7dc1d7c 01a90a21bd58 a93f4106bdd4 7f57939fd732 66081e6ef128 3935df79b1bf 12dd19e847f6 45922d9a0372 94d0ebbbbe55: (5.07462099s)
	I0202 21:48:13.615027  414960 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0202 21:48:13.709643  414960 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0202 21:48:13.716714  414960 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5639 Feb  2 21:47 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Feb  2 21:47 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Feb  2 21:47 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Feb  2 21:47 /etc/kubernetes/scheduler.conf
	
	I0202 21:48:13.716758  414960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0202 21:48:13.723177  414960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0202 21:48:13.729796  414960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0202 21:48:13.736210  414960 kubeadm.go:165] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0202 21:48:13.736242  414960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0202 21:48:13.741908  414960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0202 21:48:13.748115  414960 kubeadm.go:165] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0202 21:48:13.748152  414960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0202 21:48:13.754033  414960 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0202 21:48:13.760269  414960 kubeadm.go:677] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0202 21:48:13.760277  414960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0202 21:48:13.800107  414960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0202 21:48:15.173564  414960 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.373430623s)
	I0202 21:48:15.173588  414960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0202 21:48:15.441804  414960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0202 21:48:15.532165  414960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0202 21:48:15.635235  414960 api_server.go:51] waiting for apiserver process to appear ...
	I0202 21:48:15.635306  414960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0202 21:48:15.654099  414960 api_server.go:71] duration metric: took 18.866933ms to wait for apiserver process to appear ...
	I0202 21:48:15.654120  414960 api_server.go:87] waiting for apiserver healthz status ...
	I0202 21:48:15.654132  414960 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0202 21:48:15.659191  414960 api_server.go:266] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0202 21:48:15.666098  414960 api_server.go:140] control plane version: v1.23.2
	I0202 21:48:15.666110  414960 api_server.go:130] duration metric: took 11.98543ms to wait for apiserver health ...
	I0202 21:48:15.666117  414960 cni.go:93] Creating CNI manager for ""
	I0202 21:48:15.666122  414960 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0202 21:48:15.666127  414960 system_pods.go:43] waiting for kube-system pods to appear ...
	I0202 21:48:15.717098  414960 system_pods.go:59] 7 kube-system pods found
	I0202 21:48:15.717120  414960 system_pods.go:61] "coredns-64897985d-qmcxc" [d0448ec2-8c41-4697-9419-17bc3267ec06] Running
	I0202 21:48:15.717134  414960 system_pods.go:61] "etcd-functional-20220202214710-386638" [87c6f3be-757f-49cc-ac76-10a89cfe3e44] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0202 21:48:15.717142  414960 system_pods.go:61] "kube-apiserver-functional-20220202214710-386638" [be7b8d8a-56b0-48f5-b841-e20b68886d3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0202 21:48:15.717152  414960 system_pods.go:61] "kube-controller-manager-functional-20220202214710-386638" [d9d2c229-c9e8-4801-be93-bc91260f9aed] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0202 21:48:15.717156  414960 system_pods.go:61] "kube-proxy-c2lnh" [2e835a4a-ce75-4ea4-93dd-0473663c28e1] Running
	I0202 21:48:15.717166  414960 system_pods.go:61] "kube-scheduler-functional-20220202214710-386638" [b85c05bb-5b5e-4b6d-8bbb-0eb4ee77e3ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0202 21:48:15.717174  414960 system_pods.go:61] "storage-provisioner" [4660a715-0b0d-419a-a7b1-650bf4a8466f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0202 21:48:15.717180  414960 system_pods.go:74] duration metric: took 51.048746ms to wait for pod list to return data ...
	I0202 21:48:15.717189  414960 node_conditions.go:102] verifying NodePressure condition ...
	I0202 21:48:15.721095  414960 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0202 21:48:15.721111  414960 node_conditions.go:123] node cpu capacity is 8
	I0202 21:48:15.721123  414960 node_conditions.go:105] duration metric: took 3.930275ms to run NodePressure ...
	I0202 21:48:15.721141  414960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0202 21:48:16.140968  414960 kubeadm.go:732] waiting for restarted kubelet to initialise ...
	I0202 21:48:16.144637  414960 kubeadm.go:747] kubelet initialised
	I0202 21:48:16.144643  414960 kubeadm.go:748] duration metric: took 3.664967ms waiting for restarted kubelet to initialise ...
	I0202 21:48:16.144650  414960 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0202 21:48:16.148452  414960 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-qmcxc" in "kube-system" namespace to be "Ready" ...
	I0202 21:48:16.155988  414960 pod_ready.go:97] node "functional-20220202214710-386638" hosting pod "coredns-64897985d-qmcxc" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220202214710-386638" has status "Ready":"False"
	I0202 21:48:16.155997  414960 pod_ready.go:81] duration metric: took 7.533282ms waiting for pod "coredns-64897985d-qmcxc" in "kube-system" namespace to be "Ready" ...
	E0202 21:48:16.156007  414960 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20220202214710-386638" hosting pod "coredns-64897985d-qmcxc" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220202214710-386638" has status "Ready":"False"
	I0202 21:48:16.156034  414960 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-20220202214710-386638" in "kube-system" namespace to be "Ready" ...
	I0202 21:48:16.208387  414960 pod_ready.go:97] node "functional-20220202214710-386638" hosting pod "etcd-functional-20220202214710-386638" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220202214710-386638" has status "Ready":"False"
	I0202 21:48:16.208400  414960 pod_ready.go:81] duration metric: took 52.359418ms waiting for pod "etcd-functional-20220202214710-386638" in "kube-system" namespace to be "Ready" ...
	E0202 21:48:16.208409  414960 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20220202214710-386638" hosting pod "etcd-functional-20220202214710-386638" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220202214710-386638" has status "Ready":"False"
	I0202 21:48:16.208430  414960 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-20220202214710-386638" in "kube-system" namespace to be "Ready" ...
	I0202 21:48:16.212992  414960 pod_ready.go:97] node "functional-20220202214710-386638" hosting pod "kube-apiserver-functional-20220202214710-386638" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220202214710-386638" has status "Ready":"False"
	I0202 21:48:16.213003  414960 pod_ready.go:81] duration metric: took 4.567506ms waiting for pod "kube-apiserver-functional-20220202214710-386638" in "kube-system" namespace to be "Ready" ...
	E0202 21:48:16.213010  414960 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20220202214710-386638" hosting pod "kube-apiserver-functional-20220202214710-386638" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220202214710-386638" has status "Ready":"False"
	I0202 21:48:16.213026  414960 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-20220202214710-386638" in "kube-system" namespace to be "Ready" ...
	I0202 21:48:16.216863  414960 pod_ready.go:97] node "functional-20220202214710-386638" hosting pod "kube-controller-manager-functional-20220202214710-386638" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220202214710-386638" has status "Ready":"False"
	I0202 21:48:16.216873  414960 pod_ready.go:81] duration metric: took 3.840889ms waiting for pod "kube-controller-manager-functional-20220202214710-386638" in "kube-system" namespace to be "Ready" ...
	E0202 21:48:16.216884  414960 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20220202214710-386638" hosting pod "kube-controller-manager-functional-20220202214710-386638" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220202214710-386638" has status "Ready":"False"
	I0202 21:48:16.216903  414960 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-c2lnh" in "kube-system" namespace to be "Ready" ...
	I0202 21:48:16.544648  414960 pod_ready.go:92] pod "kube-proxy-c2lnh" in "kube-system" namespace has status "Ready":"True"
	I0202 21:48:16.544656  414960 pod_ready.go:81] duration metric: took 327.747425ms waiting for pod "kube-proxy-c2lnh" in "kube-system" namespace to be "Ready" ...
	I0202 21:48:16.544665  414960 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-20220202214710-386638" in "kube-system" namespace to be "Ready" ...
	I0202 21:48:18.959820  414960 pod_ready.go:97] error getting pod "kube-scheduler-functional-20220202214710-386638" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-20220202214710-386638": dial tcp 192.168.49.2:8441: connect: connection refused
	I0202 21:48:18.959840  414960 pod_ready.go:81] duration metric: took 2.41517032s waiting for pod "kube-scheduler-functional-20220202214710-386638" in "kube-system" namespace to be "Ready" ...
	E0202 21:48:18.959848  414960 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-functional-20220202214710-386638" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-20220202214710-386638": dial tcp 192.168.49.2:8441: connect: connection refused
	I0202 21:48:18.959873  414960 pod_ready.go:38] duration metric: took 2.815215375s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0202 21:48:18.959896  414960 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	W0202 21:48:18.976105  414960 kubeadm.go:756] unable to adjust resource limits: oom_adj check cmd /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj". : /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": Process exited with status 1
	stdout:
	
	stderr:
	cat: /proc//oom_adj: No such file or directory
	I0202 21:48:18.976119  414960 kubeadm.go:604] restartCluster took 10.48435701s
	I0202 21:48:18.976124  414960 kubeadm.go:392] StartCluster complete in 10.520908226s
	I0202 21:48:18.976136  414960 settings.go:142] acquiring lock: {Name:mkc564df8104e4c2326cd37cd909420c5fd7241d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 21:48:18.976222  414960 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
	I0202 21:48:18.976747  414960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig: {Name:mkd9197ef7cab52290ec1513b45875905284aec6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0202 21:48:18.978668  414960 kapi.go:226] failed getting deployment scale, will retry: Get "https://192.168.49.2:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8441: connect: connection refused
	W0202 21:48:19.479803  414960 kapi.go:226] failed getting deployment scale, will retry: Get "https://192.168.49.2:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8441: connect: connection refused
	I0202 21:48:21.825394  414960 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "functional-20220202214710-386638" rescaled to 1
	I0202 21:48:21.825445  414960 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0202 21:48:21.827602  414960 out.go:176] * Verifying Kubernetes components...
	I0202 21:48:21.827671  414960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0202 21:48:21.825587  414960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0202 21:48:21.825603  414960 addons.go:415] enableAddons start: toEnable=map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
	I0202 21:48:21.827805  414960 addons.go:65] Setting storage-provisioner=true in profile "functional-20220202214710-386638"
	I0202 21:48:21.827820  414960 addons.go:153] Setting addon storage-provisioner=true in "functional-20220202214710-386638"
	W0202 21:48:21.827825  414960 addons.go:165] addon storage-provisioner should already be in state true
	I0202 21:48:21.827850  414960 host.go:66] Checking if "functional-20220202214710-386638" exists ...
	I0202 21:48:21.828380  414960 cli_runner.go:133] Run: docker container inspect functional-20220202214710-386638 --format={{.State.Status}}
	I0202 21:48:21.825862  414960 config.go:176] Loaded profile config "functional-20220202214710-386638": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0202 21:48:21.828625  414960 addons.go:65] Setting default-storageclass=true in profile "functional-20220202214710-386638"
	I0202 21:48:21.828643  414960 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-20220202214710-386638"
	I0202 21:48:21.828959  414960 cli_runner.go:133] Run: docker container inspect functional-20220202214710-386638 --format={{.State.Status}}
	I0202 21:48:21.868236  414960 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0202 21:48:21.868559  414960 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0202 21:48:21.868569  414960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0202 21:48:21.868614  414960 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220202214710-386638
	I0202 21:48:21.898343  414960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49242 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/functional-20220202214710-386638/id_rsa Username:docker}
	I0202 21:48:22.008648  414960 addons.go:153] Setting addon default-storageclass=true in "functional-20220202214710-386638"
	W0202 21:48:22.008661  414960 addons.go:165] addon default-storageclass should already be in state true
	I0202 21:48:22.008692  414960 host.go:66] Checking if "functional-20220202214710-386638" exists ...
	I0202 21:48:22.009241  414960 cli_runner.go:133] Run: docker container inspect functional-20220202214710-386638 --format={{.State.Status}}
	I0202 21:48:22.050540  414960 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0202 21:48:22.050553  414960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0202 21:48:22.050614  414960 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220202214710-386638
	I0202 21:48:22.080786  414960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49242 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/functional-20220202214710-386638/id_rsa Username:docker}
	I0202 21:48:22.117510  414960 node_ready.go:35] waiting up to 6m0s for node "functional-20220202214710-386638" to be "Ready" ...
	I0202 21:48:22.117531  414960 start.go:757] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0202 21:48:22.118379  414960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0202 21:48:22.120164  414960 node_ready.go:49] node "functional-20220202214710-386638" has status "Ready":"True"
	I0202 21:48:22.120171  414960 node_ready.go:38] duration metric: took 2.634646ms waiting for node "functional-20220202214710-386638" to be "Ready" ...
	I0202 21:48:22.120182  414960 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0202 21:48:22.126639  414960 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-qmcxc" in "kube-system" namespace to be "Ready" ...
	I0202 21:48:22.130735  414960 pod_ready.go:92] pod "coredns-64897985d-qmcxc" in "kube-system" namespace has status "Ready":"True"
	I0202 21:48:22.130743  414960 pod_ready.go:81] duration metric: took 4.09023ms waiting for pod "coredns-64897985d-qmcxc" in "kube-system" namespace to be "Ready" ...
	I0202 21:48:22.130755  414960 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-20220202214710-386638" in "kube-system" namespace to be "Ready" ...
	I0202 21:48:22.220632  414960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0202 21:48:22.946163  414960 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0202 21:48:22.946191  414960 addons.go:417] enableAddons completed in 1.120590066s
	I0202 21:48:24.140288  414960 pod_ready.go:102] pod "etcd-functional-20220202214710-386638" in "kube-system" namespace has status "Ready":"False"
	I0202 21:48:26.640325  414960 pod_ready.go:102] pod "etcd-functional-20220202214710-386638" in "kube-system" namespace has status "Ready":"False"
	I0202 21:48:29.140081  414960 pod_ready.go:102] pod "etcd-functional-20220202214710-386638" in "kube-system" namespace has status "Ready":"False"
	I0202 21:48:29.639231  414960 pod_ready.go:92] pod "etcd-functional-20220202214710-386638" in "kube-system" namespace has status "Ready":"True"
	I0202 21:48:29.639252  414960 pod_ready.go:81] duration metric: took 7.508490778s waiting for pod "etcd-functional-20220202214710-386638" in "kube-system" namespace to be "Ready" ...
	I0202 21:48:29.639260  414960 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-20220202214710-386638" in "kube-system" namespace to be "Ready" ...
	I0202 21:48:29.643022  414960 pod_ready.go:92] pod "kube-controller-manager-functional-20220202214710-386638" in "kube-system" namespace has status "Ready":"True"
	I0202 21:48:29.643030  414960 pod_ready.go:81] duration metric: took 3.762921ms waiting for pod "kube-controller-manager-functional-20220202214710-386638" in "kube-system" namespace to be "Ready" ...
	I0202 21:48:29.643041  414960 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c2lnh" in "kube-system" namespace to be "Ready" ...
	I0202 21:48:29.646728  414960 pod_ready.go:92] pod "kube-proxy-c2lnh" in "kube-system" namespace has status "Ready":"True"
	I0202 21:48:29.646734  414960 pod_ready.go:81] duration metric: took 3.687602ms waiting for pod "kube-proxy-c2lnh" in "kube-system" namespace to be "Ready" ...
	I0202 21:48:29.646740  414960 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-20220202214710-386638" in "kube-system" namespace to be "Ready" ...
	I0202 21:48:29.649985  414960 pod_ready.go:92] pod "kube-scheduler-functional-20220202214710-386638" in "kube-system" namespace has status "Ready":"True"
	I0202 21:48:29.649991  414960 pod_ready.go:81] duration metric: took 3.246333ms waiting for pod "kube-scheduler-functional-20220202214710-386638" in "kube-system" namespace to be "Ready" ...
	I0202 21:48:29.649998  414960 pod_ready.go:38] duration metric: took 7.529806488s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0202 21:48:29.650016  414960 api_server.go:51] waiting for apiserver process to appear ...
	I0202 21:48:29.650057  414960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0202 21:48:29.669840  414960 api_server.go:71] duration metric: took 7.844359904s to wait for apiserver process to appear ...
	I0202 21:48:29.669852  414960 api_server.go:87] waiting for apiserver healthz status ...
	I0202 21:48:29.669859  414960 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0202 21:48:29.674217  414960 api_server.go:266] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0202 21:48:29.674899  414960 api_server.go:140] control plane version: v1.23.2
	I0202 21:48:29.674908  414960 api_server.go:130] duration metric: took 5.052911ms to wait for apiserver health ...
	I0202 21:48:29.674913  414960 system_pods.go:43] waiting for kube-system pods to appear ...
	I0202 21:48:29.678692  414960 system_pods.go:59] 7 kube-system pods found
	I0202 21:48:29.678704  414960 system_pods.go:61] "coredns-64897985d-qmcxc" [d0448ec2-8c41-4697-9419-17bc3267ec06] Running
	I0202 21:48:29.678708  414960 system_pods.go:61] "etcd-functional-20220202214710-386638" [87c6f3be-757f-49cc-ac76-10a89cfe3e44] Running
	I0202 21:48:29.678743  414960 system_pods.go:61] "kube-apiserver-functional-20220202214710-386638" [b7c7a9e3-1a91-4c68-b4be-8262631e90e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0202 21:48:29.678746  414960 system_pods.go:61] "kube-controller-manager-functional-20220202214710-386638" [d9d2c229-c9e8-4801-be93-bc91260f9aed] Running
	I0202 21:48:29.678751  414960 system_pods.go:61] "kube-proxy-c2lnh" [2e835a4a-ce75-4ea4-93dd-0473663c28e1] Running
	I0202 21:48:29.678754  414960 system_pods.go:61] "kube-scheduler-functional-20220202214710-386638" [b85c05bb-5b5e-4b6d-8bbb-0eb4ee77e3ba] Running
	I0202 21:48:29.678757  414960 system_pods.go:61] "storage-provisioner" [4660a715-0b0d-419a-a7b1-650bf4a8466f] Running
	I0202 21:48:29.678760  414960 system_pods.go:74] duration metric: took 3.843828ms to wait for pod list to return data ...
	I0202 21:48:29.678764  414960 default_sa.go:34] waiting for default service account to be created ...
	I0202 21:48:29.680686  414960 default_sa.go:45] found service account: "default"
	I0202 21:48:29.680693  414960 default_sa.go:55] duration metric: took 1.923488ms for default service account to be created ...
	I0202 21:48:29.680698  414960 system_pods.go:116] waiting for k8s-apps to be running ...
	I0202 21:48:29.839310  414960 system_pods.go:86] 7 kube-system pods found
	I0202 21:48:29.839328  414960 system_pods.go:89] "coredns-64897985d-qmcxc" [d0448ec2-8c41-4697-9419-17bc3267ec06] Running
	I0202 21:48:29.839333  414960 system_pods.go:89] "etcd-functional-20220202214710-386638" [87c6f3be-757f-49cc-ac76-10a89cfe3e44] Running
	I0202 21:48:29.839339  414960 system_pods.go:89] "kube-apiserver-functional-20220202214710-386638" [b7c7a9e3-1a91-4c68-b4be-8262631e90e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0202 21:48:29.839343  414960 system_pods.go:89] "kube-controller-manager-functional-20220202214710-386638" [d9d2c229-c9e8-4801-be93-bc91260f9aed] Running
	I0202 21:48:29.839346  414960 system_pods.go:89] "kube-proxy-c2lnh" [2e835a4a-ce75-4ea4-93dd-0473663c28e1] Running
	I0202 21:48:29.839350  414960 system_pods.go:89] "kube-scheduler-functional-20220202214710-386638" [b85c05bb-5b5e-4b6d-8bbb-0eb4ee77e3ba] Running
	I0202 21:48:29.839352  414960 system_pods.go:89] "storage-provisioner" [4660a715-0b0d-419a-a7b1-650bf4a8466f] Running
	I0202 21:48:29.839356  414960 system_pods.go:126] duration metric: took 158.655218ms to wait for k8s-apps to be running ...
	I0202 21:48:29.839362  414960 system_svc.go:44] waiting for kubelet service to be running ....
	I0202 21:48:29.839400  414960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0202 21:48:29.848770  414960 system_svc.go:56] duration metric: took 9.399604ms WaitForService to wait for kubelet.
	I0202 21:48:29.848782  414960 kubeadm.go:547] duration metric: took 8.023308795s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0202 21:48:29.848802  414960 node_conditions.go:102] verifying NodePressure condition ...
	I0202 21:48:30.038006  414960 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0202 21:48:30.038018  414960 node_conditions.go:123] node cpu capacity is 8
	I0202 21:48:30.038027  414960 node_conditions.go:105] duration metric: took 189.221761ms to run NodePressure ...
	I0202 21:48:30.038035  414960 start.go:213] waiting for startup goroutines ...
	I0202 21:48:30.071991  414960 start.go:496] kubectl: 1.23.3, cluster: 1.23.2 (minor skew: 0)
	I0202 21:48:30.074312  414960 out.go:176] * Done! kubectl is now configured to use "functional-20220202214710-386638" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-02-02 21:47:20 UTC, end at Wed 2022-02-02 21:48:31 UTC. --
	Feb 02 21:47:22 functional-20220202214710-386638 dockerd[458]: time="2022-02-02T21:47:22.888665297Z" level=info msg="Docker daemon" commit=459d0df graphdriver(s)=overlay2 version=20.10.12
	Feb 02 21:47:22 functional-20220202214710-386638 dockerd[458]: time="2022-02-02T21:47:22.888728122Z" level=info msg="Daemon has completed initialization"
	Feb 02 21:47:22 functional-20220202214710-386638 systemd[1]: Started Docker Application Container Engine.
	Feb 02 21:47:22 functional-20220202214710-386638 dockerd[458]: time="2022-02-02T21:47:22.904877194Z" level=info msg="API listen on [::]:2376"
	Feb 02 21:47:22 functional-20220202214710-386638 dockerd[458]: time="2022-02-02T21:47:22.908272077Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 02 21:47:54 functional-20220202214710-386638 dockerd[458]: time="2022-02-02T21:47:54.484839101Z" level=info msg="ignoring event" container=a320f64aeec702dc71bdc7c60e78e3a71aa7942e610d2ae2024382f7a5dc5ce6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 02 21:47:54 functional-20220202214710-386638 dockerd[458]: time="2022-02-02T21:47:54.536903771Z" level=info msg="ignoring event" container=4340f72c87f173f19edf721c5e724032c5888d6b91b9eb00453f0d74195bb14c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 02 21:48:08 functional-20220202214710-386638 dockerd[458]: time="2022-02-02T21:48:08.597885528Z" level=info msg="ignoring event" container=814d5d39e5e79ef44dbf90ae3d9de78ce4ef0e05ee5b0adb2ecf105272bdd3ea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 02 21:48:08 functional-20220202214710-386638 dockerd[458]: time="2022-02-02T21:48:08.731665035Z" level=info msg="ignoring event" container=66081e6ef12808f66181c38c236719ade987c4c4da621d6c5406a2409adc276d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 02 21:48:08 functional-20220202214710-386638 dockerd[458]: time="2022-02-02T21:48:08.807734311Z" level=info msg="ignoring event" container=984b0595435d870a47950a1bca22f6539fbc7908b0b144be86070dde4def88dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 02 21:48:08 functional-20220202214710-386638 dockerd[458]: time="2022-02-02T21:48:08.810288765Z" level=info msg="ignoring event" container=45922d9a03728f9191db73690ca5a4081bfe0374605ad74ce73bf0bee06132bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 02 21:48:08 functional-20220202214710-386638 dockerd[458]: time="2022-02-02T21:48:08.810328011Z" level=info msg="ignoring event" container=721285b959af9e497fb4e24130423ccf2f5adde4e06e8bc42c05dfa48cf58c62 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 02 21:48:08 functional-20220202214710-386638 dockerd[458]: time="2022-02-02T21:48:08.810359197Z" level=info msg="ignoring event" container=94d0ebbbbe556f8e9ce639e6a52efb784af6431a8267a5fd6b59026f5ca01c7e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 02 21:48:08 functional-20220202214710-386638 dockerd[458]: time="2022-02-02T21:48:08.810374080Z" level=info msg="ignoring event" container=01a90a21bd5819d1f5a7a385e1ab6e81c14b408e8374a8fe74bf6325885dcd97 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 02 21:48:08 functional-20220202214710-386638 dockerd[458]: time="2022-02-02T21:48:08.811418177Z" level=info msg="ignoring event" container=12dd19e847f68236e41bd00c2fb502a509a247929ecb10cf6fd39f1b948887c5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 02 21:48:08 functional-20220202214710-386638 dockerd[458]: time="2022-02-02T21:48:08.814780561Z" level=info msg="ignoring event" container=69b3a7dc1d7ce2758a45406798d36e80f79d253d4bfffbde264d55d412dcf802 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 02 21:48:08 functional-20220202214710-386638 dockerd[458]: time="2022-02-02T21:48:08.817718034Z" level=info msg="ignoring event" container=8d4ef263a0deb5e45a7204535d7ddab5b409f7cb1b43b39ac641392a3c83f172 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 02 21:48:08 functional-20220202214710-386638 dockerd[458]: time="2022-02-02T21:48:08.818148312Z" level=info msg="ignoring event" container=3935df79b1bfdcdd30d787c5d7180068cfa22a855a51563c16d4cfbebad072c6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 02 21:48:09 functional-20220202214710-386638 dockerd[458]: time="2022-02-02T21:48:09.627218393Z" level=info msg="ignoring event" container=a93f4106bdd49ee81765b71dee3de72773e17c93f5eb8b038cc4c7636e59d3aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 02 21:48:09 functional-20220202214710-386638 dockerd[458]: time="2022-02-02T21:48:09.637333860Z" level=info msg="ignoring event" container=7f57939fd732a2da4dbf3d35cfc17a89f1b6876d8b55204e828f3c8bd609e1fa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 02 21:48:10 functional-20220202214710-386638 dockerd[458]: time="2022-02-02T21:48:10.419917073Z" level=info msg="ignoring event" container=4e9cce7a10fc55f2dbbdc4177b6341887968004fb441bca08dc777522255a5c8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 02 21:48:13 functional-20220202214710-386638 dockerd[458]: time="2022-02-02T21:48:13.601105891Z" level=info msg="ignoring event" container=57c890a8160058a44730897ed106714e788cd210e12a754bd611631fd7714cf4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 02 21:48:17 functional-20220202214710-386638 dockerd[458]: time="2022-02-02T21:48:17.248090208Z" level=info msg="ignoring event" container=1251a66fdbb5f805a7e800f9eeabecfa04f9ee346e6a984c81966f441a7b4fd7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 02 21:48:17 functional-20220202214710-386638 dockerd[458]: time="2022-02-02T21:48:17.946512627Z" level=info msg="ignoring event" container=3ca396f74545111c90e718c23955827968c389a8dab5bc19096f90e47919d1d2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 02 21:48:18 functional-20220202214710-386638 dockerd[458]: time="2022-02-02T21:48:18.009323502Z" level=info msg="ignoring event" container=26bb951a2f1e150bc66a6d600f18eb919d6a3a49f67b3dc307cb5515e06348ec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	eebcc13f80f29       6e38f40d628db       8 seconds ago        Running             storage-provisioner       2                   afe5d50c0cb45
	2cfd7c3d1963c       a4ca41631cc7a       8 seconds ago        Running             coredns                   1                   318303fa9030a
	16f643730c8ff       8a0228dd6a683       12 seconds ago       Running             kube-apiserver            1                   d994562250771
	1251a66fdbb5f       8a0228dd6a683       14 seconds ago       Exited              kube-apiserver            0                   d994562250771
	161c307a7ab39       6114d758d6d16       21 seconds ago       Running             kube-scheduler            1                   9cc7e9b544d8d
	0e9f9533ff722       4783639ba7e03       21 seconds ago       Running             kube-controller-manager   1                   6a6dcce0faede
	4e9cce7a10fc5       6e38f40d628db       21 seconds ago       Exited              storage-provisioner       1                   afe5d50c0cb45
	bf69761828687       25f8c7f3da61c       21 seconds ago       Running             etcd                      1                   15552edcbb67d
	fab9e8c7dd3f4       d922ca3da64b3       22 seconds ago       Running             kube-proxy                1                   1411d0a8c5a17
	57c890a816005       a4ca41631cc7a       42 seconds ago       Exited              coredns                   0                   984b0595435d8
	8d4ef263a0deb       d922ca3da64b3       43 seconds ago       Exited              kube-proxy                0                   69b3a7dc1d7ce
	01a90a21bd581       25f8c7f3da61c       About a minute ago   Exited              etcd                      0                   94d0ebbbbe556
	7f57939fd732a       6114d758d6d16       About a minute ago   Exited              kube-scheduler            0                   3935df79b1bfd
	66081e6ef1280       4783639ba7e03       About a minute ago   Exited              kube-controller-manager   0                   12dd19e847f68
	
	* 
	* ==> coredns [2cfd7c3d1963] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	
	* 
	* ==> coredns [57c890a81600] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-20220202214710-386638
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-20220202214710-386638
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7ecaa98a6d1dab5935ea4b7778c6e187f5bde82
	                    minikube.k8s.io/name=functional-20220202214710-386638
	                    minikube.k8s.io/updated_at=2022_02_02T21_47_35_0700
	                    minikube.k8s.io/version=v1.25.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 02 Feb 2022 21:47:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-20220202214710-386638
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 02 Feb 2022 21:48:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 02 Feb 2022 21:48:16 +0000   Wed, 02 Feb 2022 21:47:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 02 Feb 2022 21:48:16 +0000   Wed, 02 Feb 2022 21:47:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 02 Feb 2022 21:48:16 +0000   Wed, 02 Feb 2022 21:47:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 02 Feb 2022 21:48:16 +0000   Wed, 02 Feb 2022 21:48:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-20220202214710-386638
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32874648Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32874648Ki
	  pods:               110
	System Info:
	  Machine ID:                 8de776e053e140d6a14c2d2def3d6bb8
	  System UUID:                bf35b03a-6495-476c-9c20-23113ad939ba
	  Boot ID:                    83bfc470-4931-4701-bbec-fbf02121ac1f
	  Kernel Version:             5.11.0-1029-gcp
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.12
	  Kubelet Version:            v1.23.2
	  Kube-Proxy Version:         v1.23.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-qmcxc                                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     43s
	  kube-system                 etcd-functional-20220202214710-386638                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         56s
	  kube-system                 kube-apiserver-functional-20220202214710-386638             250m (3%)     0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-functional-20220202214710-386638    200m (2%)     0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-proxy-c2lnh                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-scheduler-functional-20220202214710-386638             100m (1%)     0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 42s   kube-proxy  
	  Normal  Starting                 15s   kube-proxy  
	  Normal  NodeHasSufficientMemory  56s   kubelet     Node functional-20220202214710-386638 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s   kubelet     Node functional-20220202214710-386638 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s   kubelet     Node functional-20220202214710-386638 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  56s   kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 56s   kubelet     Starting kubelet.
	  Normal  NodeReady                46s   kubelet     Node functional-20220202214710-386638 status is now: NodeReady
	  Normal  Starting                 16s   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  16s   kubelet     Node functional-20220202214710-386638 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16s   kubelet     Node functional-20220202214710-386638 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16s   kubelet     Node functional-20220202214710-386638 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             15s   kubelet     Node functional-20220202214710-386638 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  15s   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                15s   kubelet     Node functional-20220202214710-386638 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 73 c4 17 2b 39 08 06
	[ +14.700161] IPv4: martian source 10.85.0.26 from 10.85.0.26, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 2a 01 b9 03 03 5a 08 06
	[ +16.216625] IPv4: martian source 10.85.0.27 from 10.85.0.27, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee b5 9c db 82 76 08 06
	[Feb 2 21:36] IPv4: martian source 10.85.0.28 from 10.85.0.28, on dev eth0
	[  +0.000027] ll header: 00000000: ff ff ff ff ff ff 96 6b 63 45 2c d8 08 06
	[ +13.105020] IPv4: martian source 10.85.0.29 from 10.85.0.29, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da 50 1c 0f 4e 40 08 06
	[ +15.030049] IPv4: martian source 10.85.0.30 from 10.85.0.30, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fe 22 15 5f e1 47 08 06
	[ +14.546166] IPv4: martian source 10.85.0.31 from 10.85.0.31, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a f4 e4 d0 20 71 08 06
	[Feb 2 21:37] IPv4: martian source 10.85.0.32 from 10.85.0.32, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 a3 62 db 9e 4a 08 06
	[ +13.742386] IPv4: martian source 10.85.0.33 from 10.85.0.33, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4a 2d 33 68 1b 46 08 06
	[ +13.053862] IPv4: martian source 10.85.0.34 from 10.85.0.34, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 02 39 ec 3a a8 2e 08 06
	[Feb 2 21:38] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 0a 08 10 7c e7 12 08 06
	[  +0.000005] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 0a 08 10 7c e7 12 08 06
	[ +17.683296] IPv4: martian source 10.85.0.3 from 10.85.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 2e 53 42 ef 9d 27 08 06
	
	* 
	* ==> etcd [01a90a21bd58] <==
	* {"level":"info","ts":"2022-02-02T21:47:29.615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-02-02T21:47:29.615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-02-02T21:47:29.615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-02-02T21:47:29.615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-02-02T21:47:29.615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-02-02T21:47:29.615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-02-02T21:47:29.615Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20220202214710-386638 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-02-02T21:47:29.615Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-02-02T21:47:29.616Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-02-02T21:47:29.616Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-02-02T21:47:29.616Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-02-02T21:47:29.616Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-02-02T21:47:29.616Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-02-02T21:47:29.616Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-02-02T21:47:29.616Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-02-02T21:47:29.616Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-02-02T21:47:29.617Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-02-02T21:48:08.626Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-02-02T21:48:08.626Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"functional-20220202214710-386638","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	WARNING: 2022/02/02 21:48:08 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/02/02 21:48:08 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-02-02T21:48:08.637Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2022-02-02T21:48:08.638Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-02-02T21:48:08.706Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-02-02T21:48:08.706Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"functional-20220202214710-386638","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> etcd [bf6976182868] <==
	* {"level":"info","ts":"2022-02-02T21:48:10.536Z","caller":"etcdserver/server.go:843","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.1","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-02-02T21:48:10.608Z","caller":"etcdserver/server.go:744","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-02-02T21:48:10.608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2022-02-02T21:48:10.608Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2022-02-02T21:48:10.609Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-02-02T21:48:10.609Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-02-02T21:48:10.610Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-02-02T21:48:10.610Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-02-02T21:48:10.610Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-02-02T21:48:10.610Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-02-02T21:48:10.610Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-02-02T21:48:11.531Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2022-02-02T21:48:11.531Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2022-02-02T21:48:11.531Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-02-02T21:48:11.531Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2022-02-02T21:48:11.531Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2022-02-02T21:48:11.531Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2022-02-02T21:48:11.531Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2022-02-02T21:48:11.532Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20220202214710-386638 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-02-02T21:48:11.532Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-02-02T21:48:11.532Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-02-02T21:48:11.532Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-02-02T21:48:11.532Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-02-02T21:48:11.533Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-02-02T21:48:11.533Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  21:48:31 up  5:31,  0 users,  load average: 1.26, 1.39, 1.01
	Linux functional-20220202214710-386638 5.11.0-1029-gcp #33~20.04.3-Ubuntu SMP Tue Jan 18 12:03:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [1251a66fdbb5] <==
	* I0202 21:48:17.229597       1 server.go:565] external host was not specified, using 192.168.49.2
	I0202 21:48:17.230081       1 server.go:172] Version: v1.23.2
	E0202 21:48:17.230398       1 run.go:74] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	
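	The exited apiserver container above died on `bind: address already in use`: the restarted kube-apiserver tried to listen on 0.0.0.0:8441 while the previous process (or its socket) still held the port, so kubelet had to wait one restart cycle before the replacement (16f643730c8f) came up. A minimal Go sketch, separate from the minikube code itself, reproducing that failure mode by double-binding a port (an ephemeral port is used here instead of 8441 so the sketch is safe to run anywhere):

	```go
	package main

	import (
		"fmt"
		"net"
	)

	// secondListenErr binds an ephemeral TCP port, then attempts to bind
	// the exact same address again while the first listener is still open,
	// and returns the error from the second attempt.
	func secondListenErr() error {
		l1, err := net.Listen("tcp", "127.0.0.1:0")
		if err != nil {
			return err
		}
		defer l1.Close()

		// Same failure mode as the restarted kube-apiserver on 0.0.0.0:8441:
		// the old holder of the port is still alive, so bind fails with
		// EADDRINUSE ("address already in use").
		l2, err := net.Listen("tcp", l1.Addr().String())
		if err == nil {
			l2.Close()
		}
		return err
	}

	func main() {
		fmt.Println("second listen:", secondListenErr())
	}
	```

	Once the first process exits and releases the socket, the same bind succeeds, which matches the log: the retry container 16f643730c8f starts Running a few seconds after 1251a66fdbb5 exits.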
	* 
	* ==> kube-apiserver [16f643730c8f] <==
	* I0202 21:48:21.768644       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0202 21:48:21.768669       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0202 21:48:21.768690       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0202 21:48:21.806641       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0202 21:48:21.806677       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0202 21:48:21.806711       1 autoregister_controller.go:141] Starting autoregister controller
	I0202 21:48:21.806716       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0202 21:48:21.806976       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0202 21:48:21.806995       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
	I0202 21:48:21.811356       1 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0202 21:48:21.822692       1 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0202 21:48:21.829256       1 controller.go:157] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0202 21:48:21.909180       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0202 21:48:21.918794       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0202 21:48:21.918893       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0202 21:48:21.920449       1 cache.go:39] Caches are synced for autoregister controller
	I0202 21:48:22.007031       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0202 21:48:22.007064       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0202 21:48:22.007030       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0202 21:48:22.765603       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0202 21:48:22.765631       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0202 21:48:22.810807       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0202 21:48:25.918114       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0202 21:48:25.922555       1 controller.go:611] quota admission added evaluator for: endpoints
	I0202 21:48:26.120735       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	
	* 
	* ==> kube-controller-manager [0e9f9533ff72] <==
	* I0202 21:48:25.949140       1 node_lifecycle_controller.go:1213] Controller detected that zone  is now in state Normal.
	I0202 21:48:25.949142       1 event.go:294] "Event occurred" object="functional-20220202214710-386638" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-20220202214710-386638 event: Registered Node functional-20220202214710-386638 in Controller"
	I0202 21:48:25.950076       1 shared_informer.go:247] Caches are synced for TTL after finished 
	I0202 21:48:25.951282       1 shared_informer.go:247] Caches are synced for node 
	I0202 21:48:25.951311       1 range_allocator.go:173] Starting range CIDR allocator
	I0202 21:48:25.951317       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	I0202 21:48:25.951326       1 shared_informer.go:247] Caches are synced for cidrallocator 
	I0202 21:48:25.955571       1 shared_informer.go:247] Caches are synced for HPA 
	I0202 21:48:25.955600       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0202 21:48:25.958994       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0202 21:48:25.960133       1 shared_informer.go:247] Caches are synced for GC 
	I0202 21:48:25.962349       1 shared_informer.go:247] Caches are synced for service account 
	I0202 21:48:25.963481       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0202 21:48:25.969390       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0202 21:48:26.036076       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0202 21:48:26.125682       1 shared_informer.go:247] Caches are synced for resource quota 
	I0202 21:48:26.142130       1 shared_informer.go:247] Caches are synced for attach detach 
	I0202 21:48:26.143033       1 shared_informer.go:247] Caches are synced for disruption 
	I0202 21:48:26.143053       1 disruption.go:371] Sending events to api server.
	I0202 21:48:26.157464       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0202 21:48:26.162455       1 shared_informer.go:247] Caches are synced for deployment 
	I0202 21:48:26.173671       1 shared_informer.go:247] Caches are synced for resource quota 
	I0202 21:48:26.585704       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0202 21:48:26.659811       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0202 21:48:26.659833       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-controller-manager [66081e6ef128] <==
	* I0202 21:47:47.242471       1 node_lifecycle_controller.go:1213] Controller detected that zone  is now in state Normal.
	I0202 21:47:47.242525       1 event.go:294] "Event occurred" object="functional-20220202214710-386638" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-20220202214710-386638 event: Registered Node functional-20220202214710-386638 in Controller"
	I0202 21:47:47.242539       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0202 21:47:47.242714       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0202 21:47:47.242736       1 shared_informer.go:247] Caches are synced for GC 
	I0202 21:47:47.243338       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0202 21:47:47.244394       1 shared_informer.go:247] Caches are synced for stateful set 
	I0202 21:47:47.250897       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-c2lnh"
	I0202 21:47:47.259157       1 shared_informer.go:247] Caches are synced for PV protection 
	I0202 21:47:47.294089       1 shared_informer.go:247] Caches are synced for expand 
	I0202 21:47:47.342871       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0202 21:47:47.347139       1 shared_informer.go:247] Caches are synced for resource quota 
	I0202 21:47:47.353350       1 shared_informer.go:247] Caches are synced for resource quota 
	I0202 21:47:47.363520       1 shared_informer.go:247] Caches are synced for TTL after finished 
	I0202 21:47:47.392398       1 shared_informer.go:247] Caches are synced for job 
	I0202 21:47:47.393291       1 shared_informer.go:247] Caches are synced for cronjob 
	I0202 21:47:47.454039       1 shared_informer.go:247] Caches are synced for attach detach 
	I0202 21:47:47.866458       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0202 21:47:47.878620       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0202 21:47:47.878643       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0202 21:47:47.997401       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0202 21:47:48.115331       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0202 21:47:48.248767       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-gh6d4"
	I0202 21:47:48.252723       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-qmcxc"
	I0202 21:47:48.269673       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-gh6d4"
	
	* 
	* ==> kube-proxy [8d4ef263a0de] <==
	* I0202 21:47:48.556611       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0202 21:47:48.556670       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0202 21:47:48.556703       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0202 21:47:48.630950       1 server_others.go:206] "Using iptables Proxier"
	I0202 21:47:48.630989       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0202 21:47:48.631000       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0202 21:47:48.631016       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0202 21:47:48.631420       1 server.go:656] "Version info" version="v1.23.2"
	I0202 21:47:48.632277       1 config.go:226] "Starting endpoint slice config controller"
	I0202 21:47:48.632315       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0202 21:47:48.632352       1 config.go:317] "Starting service config controller"
	I0202 21:47:48.632362       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0202 21:47:48.733340       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0202 21:47:48.733450       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [fab9e8c7dd3f] <==
	* E0202 21:48:10.512293       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220202214710-386638": dial tcp 192.168.49.2:8441: connect: connection refused
	E0202 21:48:13.218466       1 node.go:152] Failed to retrieve node info: nodes "functional-20220202214710-386638" is forbidden: User "system:serviceaccount:kube-system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
	I0202 21:48:15.310051       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0202 21:48:15.310286       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0202 21:48:15.310479       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0202 21:48:15.340977       1 server_others.go:206] "Using iptables Proxier"
	I0202 21:48:15.341007       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0202 21:48:15.341016       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0202 21:48:15.341034       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0202 21:48:15.341462       1 server.go:656] "Version info" version="v1.23.2"
	I0202 21:48:15.407833       1 config.go:317] "Starting service config controller"
	I0202 21:48:15.407857       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0202 21:48:15.408089       1 config.go:226] "Starting endpoint slice config controller"
	I0202 21:48:15.408150       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0202 21:48:15.508920       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0202 21:48:15.509044       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [161c307a7ab3] <==
	* W0202 21:48:13.128791       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0202 21:48:13.128800       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0202 21:48:13.217797       1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.2"
	I0202 21:48:13.220150       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I0202 21:48:13.224382       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0202 21:48:13.224493       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0202 21:48:13.224527       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0202 21:48:13.236086       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0202 21:48:13.236124       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0202 21:48:13.324848       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0202 21:48:21.822002       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: unknown (get configmaps)
	E0202 21:48:21.822739       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)
	E0202 21:48:21.822815       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)
	E0202 21:48:21.822844       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)
	E0202 21:48:21.822873       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: unknown (get pods)
	E0202 21:48:21.822892       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: unknown (get namespaces)
	E0202 21:48:21.822911       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)
	E0202 21:48:21.822936       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)
	E0202 21:48:21.822964       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)
	E0202 21:48:21.822984       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)
	E0202 21:48:21.823010       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: unknown (get nodes)
	E0202 21:48:21.823055       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)
	E0202 21:48:21.823080       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)
	E0202 21:48:21.823120       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)
	E0202 21:48:21.823148       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: unknown (get services)
	
	* 
	* ==> kube-scheduler [7f57939fd732] <==
	* E0202 21:47:32.223941       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0202 21:47:32.224017       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0202 21:47:32.224039       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0202 21:47:32.224109       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0202 21:47:32.224151       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0202 21:47:32.224264       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0202 21:47:32.224282       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0202 21:47:33.145949       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0202 21:47:33.145987       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0202 21:47:33.158906       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0202 21:47:33.158947       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0202 21:47:33.163812       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0202 21:47:33.163842       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0202 21:47:33.164491       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0202 21:47:33.164531       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0202 21:47:33.194774       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0202 21:47:33.194807       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0202 21:47:33.209773       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0202 21:47:33.209797       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0202 21:47:33.408342       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0202 21:47:33.408371       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0202 21:47:33.920827       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0202 21:48:08.710725       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0202 21:48:08.711166       1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
	I0202 21:48:08.711762       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-02-02 21:47:20 UTC, end at Wed 2022-02-02 21:48:31 UTC. --
	Feb 02 21:48:19 functional-20220202214710-386638 kubelet[5621]: I0202 21:48:19.338400    5621 scope.go:110] "RemoveContainer" containerID="1251a66fdbb5f805a7e800f9eeabecfa04f9ee346e6a984c81966f441a7b4fd7"
	Feb 02 21:48:19 functional-20220202214710-386638 kubelet[5621]: E0202 21:48:19.920081    5621 remote_runtime.go:479] "StopContainer from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 26bb951a2f1e150bc66a6d600f18eb919d6a3a49f67b3dc307cb5515e06348ec" containerID="26bb951a2f1e150bc66a6d600f18eb919d6a3a49f67b3dc307cb5515e06348ec"
	Feb 02 21:48:19 functional-20220202214710-386638 kubelet[5621]: E0202 21:48:19.920147    5621 kuberuntime_container.go:719] "Container termination failed with gracePeriod" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 26bb951a2f1e150bc66a6d600f18eb919d6a3a49f67b3dc307cb5515e06348ec" pod="kube-system/kube-apiserver-functional-20220202214710-386638" podUID=ecd64b85c03f75ef813989b5d080682a containerName="kube-apiserver" containerID="docker://26bb951a2f1e150bc66a6d600f18eb919d6a3a49f67b3dc307cb5515e06348ec" gracePeriod=1
	Feb 02 21:48:19 functional-20220202214710-386638 kubelet[5621]: E0202 21:48:19.920175    5621 kuberuntime_container.go:744] "Kill container failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 26bb951a2f1e150bc66a6d600f18eb919d6a3a49f67b3dc307cb5515e06348ec" pod="kube-system/kube-apiserver-functional-20220202214710-386638" podUID=ecd64b85c03f75ef813989b5d080682a containerName="kube-apiserver" containerID={Type:docker ID:26bb951a2f1e150bc66a6d600f18eb919d6a3a49f67b3dc307cb5515e06348ec}
	Feb 02 21:48:19 functional-20220202214710-386638 kubelet[5621]: E0202 21:48:19.921470    5621 kubelet.go:1777] failed to "KillContainer" for "kube-apiserver" with KillContainerError: "rpc error: code = Unknown desc = Error response from daemon: No such container: 26bb951a2f1e150bc66a6d600f18eb919d6a3a49f67b3dc307cb5515e06348ec"
	Feb 02 21:48:19 functional-20220202214710-386638 kubelet[5621]: E0202 21:48:19.921522    5621 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"kube-apiserver\" with KillContainerError: \"rpc error: code = Unknown desc = Error response from daemon: No such container: 26bb951a2f1e150bc66a6d600f18eb919d6a3a49f67b3dc307cb5515e06348ec\"" pod="kube-system/kube-apiserver-functional-20220202214710-386638" podUID=ecd64b85c03f75ef813989b5d080682a
	Feb 02 21:48:19 functional-20220202214710-386638 kubelet[5621]: I0202 21:48:19.922746    5621 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=ecd64b85c03f75ef813989b5d080682a path="/var/lib/kubelet/pods/ecd64b85c03f75ef813989b5d080682a/volumes"
	Feb 02 21:48:20 functional-20220202214710-386638 kubelet[5621]: I0202 21:48:20.230722    5621 kubelet.go:1693] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-20220202214710-386638" podUID=be7b8d8a-56b0-48f5-b841-e20b68886d3a
	Feb 02 21:48:21 functional-20220202214710-386638 kubelet[5621]: W0202 21:48:21.822002    5621 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:functional-20220202214710-386638" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20220202214710-386638' and this object
	Feb 02 21:48:21 functional-20220202214710-386638 kubelet[5621]: E0202 21:48:21.822047    5621 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:functional-20220202214710-386638" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20220202214710-386638' and this object
	Feb 02 21:48:21 functional-20220202214710-386638 kubelet[5621]: E0202 21:48:21.822957    5621 projected.go:199] Error preparing data for projected volume kube-api-access-lt99n for pod kube-system/coredns-64897985d-qmcxc: failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:functional-20220202214710-386638" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20220202214710-386638' and this object
	Feb 02 21:48:21 functional-20220202214710-386638 kubelet[5621]: E0202 21:48:21.823049    5621 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/d0448ec2-8c41-4697-9419-17bc3267ec06-kube-api-access-lt99n podName:d0448ec2-8c41-4697-9419-17bc3267ec06 nodeName:}" failed. No retries permitted until 2022-02-02 21:48:22.823023056 +0000 UTC m=+7.381277125 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-lt99n" (UniqueName: "kubernetes.io/projected/d0448ec2-8c41-4697-9419-17bc3267ec06-kube-api-access-lt99n") pod "coredns-64897985d-qmcxc" (UID: "d0448ec2-8c41-4697-9419-17bc3267ec06") : failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:functional-20220202214710-386638" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20220202214710-386638' and this object
	Feb 02 21:48:21 functional-20220202214710-386638 kubelet[5621]: E0202 21:48:21.823119    5621 projected.go:199] Error preparing data for projected volume kube-api-access-xcpd7 for pod kube-system/kube-proxy-c2lnh: failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:functional-20220202214710-386638" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20220202214710-386638' and this object
	Feb 02 21:48:21 functional-20220202214710-386638 kubelet[5621]: E0202 21:48:21.823157    5621 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/2e835a4a-ce75-4ea4-93dd-0473663c28e1-kube-api-access-xcpd7 podName:2e835a4a-ce75-4ea4-93dd-0473663c28e1 nodeName:}" failed. No retries permitted until 2022-02-02 21:48:22.823144662 +0000 UTC m=+7.381398724 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-xcpd7" (UniqueName: "kubernetes.io/projected/2e835a4a-ce75-4ea4-93dd-0473663c28e1-kube-api-access-xcpd7") pod "kube-proxy-c2lnh" (UID: "2e835a4a-ce75-4ea4-93dd-0473663c28e1") : failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:functional-20220202214710-386638" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20220202214710-386638' and this object
	Feb 02 21:48:21 functional-20220202214710-386638 kubelet[5621]: W0202 21:48:21.824500    5621 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:functional-20220202214710-386638" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20220202214710-386638' and this object
	Feb 02 21:48:21 functional-20220202214710-386638 kubelet[5621]: E0202 21:48:21.824559    5621 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:functional-20220202214710-386638" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20220202214710-386638' and this object
	Feb 02 21:48:21 functional-20220202214710-386638 kubelet[5621]: E0202 21:48:21.824629    5621 projected.go:199] Error preparing data for projected volume kube-api-access-zq69t for pod kube-system/storage-provisioner: failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:functional-20220202214710-386638" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20220202214710-386638' and this object
	Feb 02 21:48:21 functional-20220202214710-386638 kubelet[5621]: E0202 21:48:21.824694    5621 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/4660a715-0b0d-419a-a7b1-650bf4a8466f-kube-api-access-zq69t podName:4660a715-0b0d-419a-a7b1-650bf4a8466f nodeName:}" failed. No retries permitted until 2022-02-02 21:48:22.824675098 +0000 UTC m=+7.382929159 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-zq69t" (UniqueName: "kubernetes.io/projected/4660a715-0b0d-419a-a7b1-650bf4a8466f-kube-api-access-zq69t") pod "storage-provisioner" (UID: "4660a715-0b0d-419a-a7b1-650bf4a8466f") : failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:functional-20220202214710-386638" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20220202214710-386638' and this object
	Feb 02 21:48:21 functional-20220202214710-386638 kubelet[5621]: W0202 21:48:21.824788    5621 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:functional-20220202214710-386638" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20220202214710-386638' and this object
	Feb 02 21:48:21 functional-20220202214710-386638 kubelet[5621]: E0202 21:48:21.824829    5621 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:functional-20220202214710-386638" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20220202214710-386638' and this object
	Feb 02 21:48:22 functional-20220202214710-386638 kubelet[5621]: I0202 21:48:22.018560    5621 kubelet.go:1698] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-functional-20220202214710-386638"
	Feb 02 21:48:23 functional-20220202214710-386638 kubelet[5621]: I0202 21:48:23.248325    5621 kubelet.go:1693] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-20220202214710-386638" podUID=be7b8d8a-56b0-48f5-b841-e20b68886d3a
	Feb 02 21:48:23 functional-20220202214710-386638 kubelet[5621]: I0202 21:48:23.711928    5621 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-qmcxc through plugin: invalid network status for"
	Feb 02 21:48:23 functional-20220202214710-386638 kubelet[5621]: I0202 21:48:23.742026    5621 scope.go:110] "RemoveContainer" containerID="4e9cce7a10fc55f2dbbdc4177b6341887968004fb441bca08dc777522255a5c8"
	Feb 02 21:48:24 functional-20220202214710-386638 kubelet[5621]: I0202 21:48:24.262101    5621 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-qmcxc through plugin: invalid network status for"
	
	* 
	* ==> storage-provisioner [4e9cce7a10fc] <==
	* I0202 21:48:10.327653       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0202 21:48:10.330434       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> storage-provisioner [eebcc13f80f2] <==
	* I0202 21:48:23.851838       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0202 21:48:23.858861       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0202 21:48:23.858897       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-20220202214710-386638 -n functional-20220202214710-386638
helpers_test.go:262: (dbg) Run:  kubectl --context functional-20220202214710-386638 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestFunctional/serial/ComponentHealth]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context functional-20220202214710-386638 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context functional-20220202214710-386638 describe pod : exit status 1 (39.003493ms)
** stderr ** 
	error: resource name may not be empty
** /stderr **
helpers_test.go:278: kubectl --context functional-20220202214710-386638 describe pod : exit status 1
--- FAIL: TestFunctional/serial/ComponentHealth (2.23s)
TestNoKubernetes/serial/StartNoArgs (4.6s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:192: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220202220601-386638 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:192: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-20220202220601-386638 --driver=docker  --container-runtime=docker: signal: killed (4.147884334s)
-- stdout --
	* [NoKubernetes-20220202220601-386638] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13251
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	* Starting minikube without Kubernetes NoKubernetes-20220202220601-386638 in cluster NoKubernetes-20220202220601-386638
	* Pulling base image ...
	* Restarting existing docker container for "NoKubernetes-20220202220601-386638" ...

-- /stdout --
no_kubernetes_test.go:194: failed to start minikube with args: "out/minikube-linux-amd64 start -p NoKubernetes-20220202220601-386638 --driver=docker  --container-runtime=docker" : signal: killed
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestNoKubernetes/serial/StartNoArgs]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect NoKubernetes-20220202220601-386638
helpers_test.go:236: (dbg) docker inspect NoKubernetes-20220202220601-386638:
-- stdout --
	[
	    {
	        "Id": "57cc0b62ca143a1000f396158761ad0da28868e9a7256cfa8819530a263f6c1a",
	        "Created": "2022-02-02T22:10:46.136981829Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 603488,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-02-02T22:10:57.85501942Z",
	            "FinishedAt": "2022-02-02T22:10:56.051179053Z"
	        },
	        "Image": "sha256:64d09634c60d2a75596bc705aa84bdc41f76fe47c5d9ee362550bffbdc256979",
	        "ResolvConfPath": "/var/lib/docker/containers/57cc0b62ca143a1000f396158761ad0da28868e9a7256cfa8819530a263f6c1a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/57cc0b62ca143a1000f396158761ad0da28868e9a7256cfa8819530a263f6c1a/hostname",
	        "HostsPath": "/var/lib/docker/containers/57cc0b62ca143a1000f396158761ad0da28868e9a7256cfa8819530a263f6c1a/hosts",
	        "LogPath": "/var/lib/docker/containers/57cc0b62ca143a1000f396158761ad0da28868e9a7256cfa8819530a263f6c1a/57cc0b62ca143a1000f396158761ad0da28868e9a7256cfa8819530a263f6c1a-json.log",
	        "Name": "/NoKubernetes-20220202220601-386638",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "NoKubernetes-20220202220601-386638:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "NoKubernetes-20220202220601-386638",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2e02d51437278c1adbef40d871f899875e3e8dd4b865ad6848e0fe12c0fe8bad-init/diff:/var/lib/docker/overlay2/d4663ead96d8fac7b028dde763b7445fdff56593784b7a04a0c4c7450b12ac8a/diff:/var/lib/docker/overlay2/f0c766a8d59c3075c44f5eaf54a88aef49ac3a770e6f1e3ac6ebd4004f5b70e2/diff:/var/lib/docker/overlay2/03f8cecf4339603da26d55f367194130430755e9c21844c70ce3d30bd8d5d776/diff:/var/lib/docker/overlay2/8c56519bb2995287e5231612d5ca3809d3ca82a08d3fb88d4bc3f28acb44c548/diff:/var/lib/docker/overlay2/cfdceedf4766b92de092fc07ad1b8f1f378126b680f71754acd32502d082ac4c/diff:/var/lib/docker/overlay2/243a3de48d24643b038a407d872fc1ebb1cca9719c882859b2e65a71ba051c3d/diff:/var/lib/docker/overlay2/73d9c1c910d6be4b6f1719f9ad777c50baedc16cb66f6167f61c34c3535d6aa8/diff:/var/lib/docker/overlay2/414a2e06f368b9a6893993643cc902550952ea16b431d03ef81177a67bcc6055/diff:/var/lib/docker/overlay2/237cb26dc1fb33617d977694b7036e843ca7076f249a87cedb9719f8bb852369/diff:/var/lib/docker/overlay2/f94a67
39f2f53cb0adbb52fccb7daf09ff60a575e9ef35eedbfae9a115cb0bee/diff:/var/lib/docker/overlay2/1a7b8bc08aeb75e64990bf84e55e245d3ccba13a7248844f2a2b41a179987edd/diff:/var/lib/docker/overlay2/9d6fe9ebc7ebbd17697484e59d73c2e56a57b9efd010b504af3e94f33693a302/diff:/var/lib/docker/overlay2/a6b04596431127c96091ac1a60b24c2efd5dc5925d3a6be2c7c991a40f0fba61/diff:/var/lib/docker/overlay2/ddffede76ffd319874db8f340cf399929a918323513065094964ebc981ccebe6/diff:/var/lib/docker/overlay2/873af33e16ed022cdbff8f367fac5f511da2edbe653c3a4df4b38f17018fde26/diff:/var/lib/docker/overlay2/49ecfae1413a927bd924c5c004499b9af18da6c25beffa6da10506397419e246/diff:/var/lib/docker/overlay2/8663e1a8bea6b4285860191688fcf3d3aa95f958547b7d2918feda19facc72d2/diff:/var/lib/docker/overlay2/96864535f6abf106236521f0aa4d98958c91533ecc34864088813a5d638d7a85/diff:/var/lib/docker/overlay2/3245e931c6f0447b1c6dd192323b06a5580c4cb9c80e63e19c886107effec1a8/diff:/var/lib/docker/overlay2/fbfc10643f3968343b6f304ba573ab22123857f0ac7bbdf796e69cc759ffcb01/diff:/var/lib/d
ocker/overlay2/008c499b0a1d502f449e9408eb9d7f0d3fd1f927c6fed14c2daf9128f2481a2e/diff:/var/lib/docker/overlay2/049cceba63d6fda39cec7c7c348ae0046d3bcfc9a354ef8c20d2cb0da0c6126e/diff:/var/lib/docker/overlay2/7423dec7519f3618cdbd580c816a41a86bffe6f544fe6e6c90b0891ab319effe/diff:/var/lib/docker/overlay2/b78015fe190a7617cff46e43a3b7a90d608036971a3c758aab0d4c814064c775/diff:/var/lib/docker/overlay2/f1c7b371c8afb5d9df1ad0b6bcf5860b7d0931bc04f95f00c2f7dc66076996d6/diff:/var/lib/docker/overlay2/68d4abf197eeabb5c097584a1527cd6993fb2d55b0fac9957ec46f8412efdf06/diff:/var/lib/docker/overlay2/f08b8daa4f1c25becadfdae5150584a3dd3ac3bf46afaa6e101fe8e0823572f4/diff:/var/lib/docker/overlay2/1965ab77a969620854fa7e23a0c745af7766a48e9ec2abacecc3e064d1c8fa6a/diff:/var/lib/docker/overlay2/e7cbe6b577242fb8b973317eaa8ee217a8a9ee355b88362b66d45d718b3b2c4b/diff:/var/lib/docker/overlay2/c59e06d8f5c93ed9cb94a83e137c16f3dcdea80b9dceccba323b6ddc7543de46/diff:/var/lib/docker/overlay2/d2e3ed906400776c06ca0502e30b187ca7e8cafdf00da3a54c16cd3818f
76bbc/diff:/var/lib/docker/overlay2/8751d7f7a388ed73174c9365d765716ea6d4d513683a025fe6e322b37e0ffa17/diff:/var/lib/docker/overlay2/e19c84986e7254f1600c2a35898ef2158b4e5b77f2ce8cdf017c2f326ffc0491/diff:/var/lib/docker/overlay2/3dc4411ebe2379955bd8260b29d8faa36b7e965e38b15b19cc65ad0a63e431f6/diff:/var/lib/docker/overlay2/2cae1638c524a830e44f0cb4b8db0e6063415a57346d1d190e50edea3c78df73/diff:/var/lib/docker/overlay2/9c15e8e15ab0ee2a47827fef4273bd0d4ffc315726879f2f422a01be6116fcb2/diff:/var/lib/docker/overlay2/d39456e34bd05af837a974416337cc6b9f6ea243f25e9033212a340da93d3154/diff:/var/lib/docker/overlay2/c0101867e0d0e0ff5aaf7104e95cb6cab78625c9cd8697d2a4f28fff809159ff/diff:/var/lib/docker/overlay2/f1c53d89ed6960deaee63188b5bffd5f88edaf3546c4312205f3b465f7bca9b5/diff:/var/lib/docker/overlay2/2685ce865e736b98fc7e2e1447bdbd580080188c81a14278cf54b8e8dedbf1d9/diff:/var/lib/docker/overlay2/985637e295ac0794f3d93fd241c0526bb5ac4c727f5680fc30c1ed3dde3598ae/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394c
fc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2e02d51437278c1adbef40d871f899875e3e8dd4b865ad6848e0fe12c0fe8bad/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2e02d51437278c1adbef40d871f899875e3e8dd4b865ad6848e0fe12c0fe8bad/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2e02d51437278c1adbef40d871f899875e3e8dd4b865ad6848e0fe12c0fe8bad/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "NoKubernetes-20220202220601-386638",
	                "Source": "/var/lib/docker/volumes/NoKubernetes-20220202220601-386638/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "NoKubernetes-20220202220601-386638",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "NoKubernetes-20220202220601-386638",
	                "name.minikube.sigs.k8s.io": "NoKubernetes-20220202220601-386638",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8c5a57abe26e8e2e7ef92558849ca37bf777deafe45685cb72bfc09c06fa20da",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49444"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49443"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49440"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49442"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49441"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/8c5a57abe26e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "NoKubernetes-20220202220601-386638": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "57cc0b62ca14",
	                        "NoKubernetes-20220202220601-386638"
	                    ],
	                    "NetworkID": "03b1f96f62f3e8cf8e5133c7dba7223807deb98b57095c6644fecb61049bd395",
	                    "EndpointID": "8a5184d4828219423c258e515d9b62378354f978b3ae00bdabfe9fa5edbb4907",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-20220202220601-386638 -n NoKubernetes-20220202220601-386638
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-20220202220601-386638 -n NoKubernetes-20220202220601-386638: exit status 6 (413.583425ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0202 22:11:01.575828  604612 status.go:413] kubeconfig endpoint: extract IP: "NoKubernetes-20220202220601-386638" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig

** /stderr **
helpers_test.go:240: status error: exit status 6 (may be ok)
helpers_test.go:242: "NoKubernetes-20220202220601-386638" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (4.60s)

TestNetworkPlugins/group/calico/Start (524.86s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20220202220909-386638 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-20220202220909-386638 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=docker: exit status 80 (8m44.833274504s)

-- stdout --
	* [calico-20220202220909-386638] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13251
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node calico-20220202220909-386638 in cluster calico-20220202220909-386638
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.2 on Docker 20.10.12 ...
	  - kubelet.housekeeping-interval=5m
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0202 22:21:05.878035  703886 out.go:297] Setting OutFile to fd 1 ...
	I0202 22:21:05.878138  703886 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0202 22:21:05.878150  703886 out.go:310] Setting ErrFile to fd 2...
	I0202 22:21:05.878155  703886 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0202 22:21:05.878334  703886 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/bin
	I0202 22:21:05.878756  703886 out.go:304] Setting JSON to false
	I0202 22:21:05.880397  703886 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":21818,"bootTime":1643818648,"procs":754,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0202 22:21:05.880491  703886 start.go:122] virtualization: kvm guest
	I0202 22:21:05.882713  703886 out.go:176] * [calico-20220202220909-386638] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	I0202 22:21:05.882896  703886 notify.go:174] Checking for updates...
	I0202 22:21:05.884725  703886 out.go:176]   - MINIKUBE_LOCATION=13251
	I0202 22:21:05.886627  703886 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0202 22:21:05.888612  703886 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
	I0202 22:21:05.890608  703886 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube
	I0202 22:21:05.893349  703886 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0202 22:21:05.894187  703886 config.go:176] Loaded profile config "cilium-20220202220909-386638": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0202 22:21:05.894327  703886 config.go:176] Loaded profile config "custom-weave-20220202220909-386638": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0202 22:21:05.894435  703886 config.go:176] Loaded profile config "kubenet-20220202220909-386638": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0202 22:21:05.894502  703886 driver.go:344] Setting default libvirt URI to qemu:///system
	I0202 22:21:05.957414  703886 docker.go:132] docker version: linux-20.10.12
	I0202 22:21:05.957526  703886 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0202 22:21:06.093681  703886 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:73 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-02 22:21:06.020115182 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0202 22:21:06.093838  703886 docker.go:237] overlay module found
	I0202 22:21:06.095756  703886 out.go:176] * Using the docker driver based on user configuration
	I0202 22:21:06.095782  703886 start.go:281] selected driver: docker
	I0202 22:21:06.095788  703886 start.go:798] validating driver "docker" against <nil>
	I0202 22:21:06.095809  703886 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0202 22:21:06.095863  703886 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0202 22:21:06.095885  703886 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0202 22:21:06.097652  703886 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0202 22:21:06.098296  703886 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0202 22:21:06.208694  703886 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:73 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-02-02 22:21:06.130745178 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0202 22:21:06.208840  703886 start_flags.go:288] no existing cluster config was found, will generate one from the flags 
	I0202 22:21:06.209012  703886 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	I0202 22:21:06.209041  703886 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0202 22:21:06.209061  703886 cni.go:93] Creating CNI manager for "calico"
	I0202 22:21:06.209069  703886 start_flags.go:297] Found "Calico" CNI - setting NetworkPlugin=cni
	I0202 22:21:06.209078  703886 start_flags.go:302] config:
	{Name:calico-20220202220909-386638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:calico-20220202220909-386638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0202 22:21:06.212701  703886 out.go:176] * Starting control plane node calico-20220202220909-386638 in cluster calico-20220202220909-386638
	I0202 22:21:06.212750  703886 cache.go:120] Beginning downloading kic base image for docker with docker
	I0202 22:21:06.214201  703886 out.go:176] * Pulling base image ...
	I0202 22:21:06.214270  703886 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0202 22:21:06.214315  703886 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4
	I0202 22:21:06.214335  703886 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I0202 22:21:06.214352  703886 cache.go:57] Caching tarball of preloaded images
	I0202 22:21:06.214654  703886 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0202 22:21:06.214685  703886 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.2 on docker
	I0202 22:21:06.214864  703886 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202220909-386638/config.json ...
	I0202 22:21:06.214905  703886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202220909-386638/config.json: {Name:mk907d2ef0134e8c3079ebdba21fb5cb37b10ee3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 22:21:06.271005  703886 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I0202 22:21:06.271051  703886 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I0202 22:21:06.271069  703886 cache.go:208] Successfully downloaded all kic artifacts
	I0202 22:21:06.271114  703886 start.go:313] acquiring machines lock for calico-20220202220909-386638: {Name:mk973d8ced42fc7ebed6073617404826dd11d58d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0202 22:21:06.271256  703886 start.go:317] acquired machines lock for "calico-20220202220909-386638" in 119.155µs
	I0202 22:21:06.271285  703886 start.go:89] Provisioning new machine with config: &{Name:calico-20220202220909-386638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:calico-20220202220909-386638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0202 22:21:06.271388  703886 start.go:126] createHost starting for "" (driver="docker")
	I0202 22:21:06.273923  703886 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0202 22:21:06.274223  703886 start.go:160] libmachine.API.Create for "calico-20220202220909-386638" (driver="docker")
	I0202 22:21:06.274269  703886 client.go:168] LocalClient.Create starting
	I0202 22:21:06.274325  703886 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem
	I0202 22:21:06.274376  703886 main.go:130] libmachine: Decoding PEM data...
	I0202 22:21:06.274412  703886 main.go:130] libmachine: Parsing certificate...
	I0202 22:21:06.274500  703886 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/cert.pem
	I0202 22:21:06.274526  703886 main.go:130] libmachine: Decoding PEM data...
	I0202 22:21:06.274544  703886 main.go:130] libmachine: Parsing certificate...
	I0202 22:21:06.275000  703886 cli_runner.go:133] Run: docker network inspect calico-20220202220909-386638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0202 22:21:06.309815  703886 cli_runner.go:180] docker network inspect calico-20220202220909-386638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0202 22:21:06.309880  703886 network_create.go:254] running [docker network inspect calico-20220202220909-386638] to gather additional debugging logs...
	I0202 22:21:06.309908  703886 cli_runner.go:133] Run: docker network inspect calico-20220202220909-386638
	W0202 22:21:06.346704  703886 cli_runner.go:180] docker network inspect calico-20220202220909-386638 returned with exit code 1
	I0202 22:21:06.346741  703886 network_create.go:257] error running [docker network inspect calico-20220202220909-386638]: docker network inspect calico-20220202220909-386638: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220202220909-386638
	I0202 22:21:06.346756  703886 network_create.go:259] output of [docker network inspect calico-20220202220909-386638]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220202220909-386638
	
	** /stderr **
	I0202 22:21:06.346824  703886 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0202 22:21:06.403878  703886 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-3937f19ba1b1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:5d:48:3d:0a}}
	I0202 22:21:06.404665  703886 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-1405a487c285 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:a7:6c:d9:08}}
	I0202 22:21:06.405992  703886 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc00093e300] misses:0}
	I0202 22:21:06.406031  703886 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0202 22:21:06.406050  703886 network_create.go:106] attempt to create docker network calico-20220202220909-386638 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0202 22:21:06.406099  703886 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220202220909-386638
	I0202 22:21:06.496810  703886 network_create.go:90] docker network calico-20220202220909-386638 192.168.67.0/24 created
	I0202 22:21:06.496870  703886 kic.go:106] calculated static IP "192.168.67.2" for the "calico-20220202220909-386638" container
	I0202 22:21:06.496934  703886 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0202 22:21:06.541894  703886 cli_runner.go:133] Run: docker volume create calico-20220202220909-386638 --label name.minikube.sigs.k8s.io=calico-20220202220909-386638 --label created_by.minikube.sigs.k8s.io=true
	I0202 22:21:06.580917  703886 oci.go:102] Successfully created a docker volume calico-20220202220909-386638
	I0202 22:21:06.581016  703886 cli_runner.go:133] Run: docker run --rm --name calico-20220202220909-386638-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220202220909-386638 --entrypoint /usr/bin/test -v calico-20220202220909-386638:/var gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -d /var/lib
	I0202 22:21:12.240954  703886 cli_runner.go:186] Completed: docker run --rm --name calico-20220202220909-386638-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220202220909-386638 --entrypoint /usr/bin/test -v calico-20220202220909-386638:/var gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -d /var/lib: (5.659881803s)
	I0202 22:21:12.240991  703886 oci.go:106] Successfully prepared a docker volume calico-20220202220909-386638
	I0202 22:21:12.241027  703886 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0202 22:21:12.241053  703886 kic.go:179] Starting extracting preloaded images to volume ...
	I0202 22:21:12.241127  703886 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220202220909-386638:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir
	I0202 22:21:18.256219  703886 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220202220909-386638:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir: (6.015044723s)
	I0202 22:21:18.256258  703886 kic.go:188] duration metric: took 6.015205 seconds to extract preloaded images to volume
	W0202 22:21:18.256299  703886 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0202 22:21:18.256317  703886 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0202 22:21:18.256371  703886 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0202 22:21:18.390443  703886 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220202220909-386638 --name calico-20220202220909-386638 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220202220909-386638 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220202220909-386638 --network calico-20220202220909-386638 --ip 192.168.67.2 --volume calico-20220202220909-386638:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b
	I0202 22:21:18.902784  703886 cli_runner.go:133] Run: docker container inspect calico-20220202220909-386638 --format={{.State.Running}}
	I0202 22:21:18.954868  703886 cli_runner.go:133] Run: docker container inspect calico-20220202220909-386638 --format={{.State.Status}}
	I0202 22:21:18.998346  703886 cli_runner.go:133] Run: docker exec calico-20220202220909-386638 stat /var/lib/dpkg/alternatives/iptables
	I0202 22:21:19.081538  703886 oci.go:281] the created container "calico-20220202220909-386638" has a running status.
	I0202 22:21:19.081578  703886 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/calico-20220202220909-386638/id_rsa...
	I0202 22:21:19.348995  703886 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/calico-20220202220909-386638/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0202 22:21:19.534760  703886 cli_runner.go:133] Run: docker container inspect calico-20220202220909-386638 --format={{.State.Status}}
	I0202 22:21:19.606515  703886 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0202 22:21:19.606540  703886 kic_runner.go:114] Args: [docker exec --privileged calico-20220202220909-386638 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0202 22:21:19.754279  703886 cli_runner.go:133] Run: docker container inspect calico-20220202220909-386638 --format={{.State.Status}}
	I0202 22:21:19.803983  703886 machine.go:88] provisioning docker machine ...
	I0202 22:21:19.804041  703886 ubuntu.go:169] provisioning hostname "calico-20220202220909-386638"
	I0202 22:21:19.804098  703886 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220202220909-386638
	I0202 22:21:19.865758  703886 main.go:130] libmachine: Using SSH client type: native
	I0202 22:21:19.866068  703886 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil>  [] 0s} 127.0.0.1 49529 <nil> <nil>}
	I0202 22:21:19.866105  703886 main.go:130] libmachine: About to run SSH command:
	sudo hostname calico-20220202220909-386638 && echo "calico-20220202220909-386638" | sudo tee /etc/hostname
	I0202 22:21:20.055872  703886 main.go:130] libmachine: SSH cmd err, output: <nil>: calico-20220202220909-386638
	
	I0202 22:21:20.055958  703886 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220202220909-386638
	I0202 22:21:20.100183  703886 main.go:130] libmachine: Using SSH client type: native
	I0202 22:21:20.100324  703886 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil>  [] 0s} 127.0.0.1 49529 <nil> <nil>}
	I0202 22:21:20.100343  703886 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20220202220909-386638' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20220202220909-386638/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20220202220909-386638' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0202 22:21:20.248125  703886 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0202 22:21:20.248162  703886 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube}
	I0202 22:21:20.248209  703886 ubuntu.go:177] setting up certificates
	I0202 22:21:20.248221  703886 provision.go:83] configureAuth start
	I0202 22:21:20.248283  703886 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220202220909-386638
	I0202 22:21:20.298460  703886 provision.go:138] copyHostCerts
	I0202 22:21:20.298603  703886 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.pem, removing ...
	I0202 22:21:20.298620  703886 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.pem
	I0202 22:21:20.298689  703886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.pem (1078 bytes)
	I0202 22:21:20.298788  703886 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cert.pem, removing ...
	I0202 22:21:20.298814  703886 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cert.pem
	I0202 22:21:20.298847  703886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cert.pem (1123 bytes)
	I0202 22:21:20.298922  703886 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/key.pem, removing ...
	I0202 22:21:20.298931  703886 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/key.pem
	I0202 22:21:20.298962  703886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/key.pem (1679 bytes)
	I0202 22:21:20.299016  703886 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca-key.pem org=jenkins.calico-20220202220909-386638 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20220202220909-386638]
	I0202 22:21:20.566426  703886 provision.go:172] copyRemoteCerts
	I0202 22:21:20.566490  703886 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0202 22:21:20.566541  703886 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220202220909-386638
	I0202 22:21:20.600370  703886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49529 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/calico-20220202220909-386638/id_rsa Username:docker}
	I0202 22:21:20.698618  703886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0202 22:21:20.721451  703886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0202 22:21:20.767315  703886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0202 22:21:20.786679  703886 provision.go:86] duration metric: configureAuth took 538.4366ms
	I0202 22:21:20.786720  703886 ubuntu.go:193] setting minikube options for container-runtime
	I0202 22:21:20.786918  703886 config.go:176] Loaded profile config "calico-20220202220909-386638": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0202 22:21:20.786974  703886 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220202220909-386638
	I0202 22:21:20.846838  703886 main.go:130] libmachine: Using SSH client type: native
	I0202 22:21:20.847028  703886 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil>  [] 0s} 127.0.0.1 49529 <nil> <nil>}
	I0202 22:21:20.847048  703886 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0202 22:21:20.994716  703886 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0202 22:21:20.994743  703886 ubuntu.go:71] root file system type: overlay
	I0202 22:21:20.994967  703886 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0202 22:21:20.995039  703886 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220202220909-386638
	I0202 22:21:21.061909  703886 main.go:130] libmachine: Using SSH client type: native
	I0202 22:21:21.062091  703886 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil>  [] 0s} 127.0.0.1 49529 <nil> <nil>}
	I0202 22:21:21.062221  703886 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0202 22:21:21.229906  703886 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0202 22:21:21.230008  703886 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220202220909-386638
	I0202 22:21:21.269231  703886 main.go:130] libmachine: Using SSH client type: native
	I0202 22:21:21.269398  703886 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil>  [] 0s} 127.0.0.1 49529 <nil> <nil>}
	I0202 22:21:21.269417  703886 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0202 22:21:22.172229  703886 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-12-13 11:43:42.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-02-02 22:21:21.223960405 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0202 22:21:22.172264  703886 machine.go:91] provisioned docker machine in 2.368241493s
	I0202 22:21:22.172274  703886 client.go:171] LocalClient.Create took 15.89800021s
	I0202 22:21:22.172292  703886 start.go:168] duration metric: libmachine.API.Create for "calico-20220202220909-386638" took 15.898071075s
	I0202 22:21:22.172302  703886 start.go:267] post-start starting for "calico-20220202220909-386638" (driver="docker")
	I0202 22:21:22.172318  703886 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0202 22:21:22.172395  703886 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0202 22:21:22.172438  703886 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220202220909-386638
	I0202 22:21:22.216177  703886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49529 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/calico-20220202220909-386638/id_rsa Username:docker}
	I0202 22:21:22.319461  703886 ssh_runner.go:195] Run: cat /etc/os-release
	I0202 22:21:22.323142  703886 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0202 22:21:22.323179  703886 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0202 22:21:22.323192  703886 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0202 22:21:22.323201  703886 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0202 22:21:22.323213  703886 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/addons for local assets ...
	I0202 22:21:22.323311  703886 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files for local assets ...
	I0202 22:21:22.323440  703886 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/3866382.pem -> 3866382.pem in /etc/ssl/certs
	I0202 22:21:22.323576  703886 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0202 22:21:22.331232  703886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/3866382.pem --> /etc/ssl/certs/3866382.pem (1708 bytes)
	I0202 22:21:22.351517  703886 start.go:270] post-start completed in 179.189465ms
	I0202 22:21:22.351959  703886 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220202220909-386638
	I0202 22:21:22.389658  703886 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202220909-386638/config.json ...
	I0202 22:21:22.389955  703886 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0202 22:21:22.390007  703886 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220202220909-386638
	I0202 22:21:22.424910  703886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49529 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/calico-20220202220909-386638/id_rsa Username:docker}
	I0202 22:21:22.519271  703886 start.go:129] duration metric: createHost completed in 16.247870894s
	I0202 22:21:22.519309  703886 start.go:80] releasing machines lock for "calico-20220202220909-386638", held for 16.248035713s
	I0202 22:21:22.519399  703886 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220202220909-386638
	I0202 22:21:22.556609  703886 ssh_runner.go:195] Run: systemctl --version
	I0202 22:21:22.556665  703886 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220202220909-386638
	I0202 22:21:22.556938  703886 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0202 22:21:22.556991  703886 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220202220909-386638
	I0202 22:21:22.602208  703886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49529 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/calico-20220202220909-386638/id_rsa Username:docker}
	I0202 22:21:22.609199  703886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49529 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/calico-20220202220909-386638/id_rsa Username:docker}
	I0202 22:21:22.713783  703886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0202 22:21:22.724644  703886 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0202 22:21:22.735526  703886 cruntime.go:272] skipping containerd shutdown because we are bound to it
	I0202 22:21:22.735590  703886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0202 22:21:22.745990  703886 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0202 22:21:22.765050  703886 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0202 22:21:22.843884  703886 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0202 22:21:22.931598  703886 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0202 22:21:22.945995  703886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0202 22:21:23.029171  703886 ssh_runner.go:195] Run: sudo systemctl start docker
	I0202 22:21:23.039601  703886 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0202 22:21:23.082132  703886 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0202 22:21:23.132073  703886 out.go:203] * Preparing Kubernetes v1.23.2 on Docker 20.10.12 ...
	I0202 22:21:23.132156  703886 cli_runner.go:133] Run: docker network inspect calico-20220202220909-386638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0202 22:21:23.167671  703886 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0202 22:21:23.171400  703886 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0202 22:21:23.183500  703886 out.go:176]   - kubelet.housekeeping-interval=5m
	I0202 22:21:23.183586  703886 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0202 22:21:23.183650  703886 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0202 22:21:23.218094  703886 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.2
	k8s.gcr.io/kube-scheduler:v1.23.2
	k8s.gcr.io/kube-controller-manager:v1.23.2
	k8s.gcr.io/kube-proxy:v1.23.2
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0202 22:21:23.218121  703886 docker.go:537] Images already preloaded, skipping extraction
	I0202 22:21:23.218179  703886 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0202 22:21:23.252457  703886 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.2
	k8s.gcr.io/kube-scheduler:v1.23.2
	k8s.gcr.io/kube-controller-manager:v1.23.2
	k8s.gcr.io/kube-proxy:v1.23.2
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0202 22:21:23.252497  703886 cache_images.go:84] Images are preloaded, skipping loading
	I0202 22:21:23.252549  703886 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0202 22:21:23.343820  703886 cni.go:93] Creating CNI manager for "calico"
	I0202 22:21:23.343846  703886 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0202 22:21:23.343862  703886 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20220202220909-386638 NodeName:calico-20220202220909-386638 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0202 22:21:23.343987  703886 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "calico-20220202220909-386638"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0202 22:21:23.344063  703886 kubeadm.go:931] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=calico-20220202220909-386638 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.2 ClusterName:calico-20220202220909-386638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I0202 22:21:23.344113  703886 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.2
	I0202 22:21:23.351789  703886 binaries.go:44] Found k8s binaries, skipping transfer
	I0202 22:21:23.351862  703886 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0202 22:21:23.359012  703886 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (402 bytes)
	I0202 22:21:23.373230  703886 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0202 22:21:23.387853  703886 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2050 bytes)
	I0202 22:21:23.401557  703886 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0202 22:21:23.404786  703886 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0202 22:21:23.414118  703886 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202220909-386638 for IP: 192.168.67.2
	I0202 22:21:23.414215  703886 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.key
	I0202 22:21:23.414262  703886 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/proxy-client-ca.key
	I0202 22:21:23.414311  703886 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202220909-386638/client.key
	I0202 22:21:23.414324  703886 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202220909-386638/client.crt with IP's: []
	I0202 22:21:23.667377  703886 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202220909-386638/client.crt ...
	I0202 22:21:23.667412  703886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202220909-386638/client.crt: {Name:mk91f59951b2fe3d583b5d5046f88d76a445257d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 22:21:23.667642  703886 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202220909-386638/client.key ...
	I0202 22:21:23.667660  703886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202220909-386638/client.key: {Name:mk10e2773b7b6e4f2b1a9a2bf34bc5ed41531d28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 22:21:23.667782  703886 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202220909-386638/apiserver.key.c7fa3a9e
	I0202 22:21:23.667803  703886 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202220909-386638/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0202 22:21:23.869888  703886 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202220909-386638/apiserver.crt.c7fa3a9e ...
	I0202 22:21:23.869932  703886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202220909-386638/apiserver.crt.c7fa3a9e: {Name:mkeaad4f849da2c32de50ff09937ecd682fe226c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 22:21:23.870125  703886 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202220909-386638/apiserver.key.c7fa3a9e ...
	I0202 22:21:23.870144  703886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202220909-386638/apiserver.key.c7fa3a9e: {Name:mke996c25df3dd10eaae72ebc9af0b4c87166f0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 22:21:23.870246  703886 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202220909-386638/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202220909-386638/apiserver.crt
	I0202 22:21:23.870314  703886 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202220909-386638/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202220909-386638/apiserver.key
	I0202 22:21:23.870377  703886 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202220909-386638/proxy-client.key
	I0202 22:21:23.870394  703886 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202220909-386638/proxy-client.crt with IP's: []
	I0202 22:21:23.977779  703886 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202220909-386638/proxy-client.crt ...
	I0202 22:21:23.977819  703886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202220909-386638/proxy-client.crt: {Name:mk6f8fe3ecab4cc71a6426806ad839bedb3fb425 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 22:21:23.978019  703886 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202220909-386638/proxy-client.key ...
	I0202 22:21:23.978037  703886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202220909-386638/proxy-client.key: {Name:mk887114ce644ebe3353fbdeb8738ec0c64781c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 22:21:23.978242  703886 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/386638.pem (1338 bytes)
	W0202 22:21:23.978289  703886 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/386638_empty.pem, impossibly tiny 0 bytes
	I0202 22:21:23.978305  703886 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca-key.pem (1679 bytes)
	I0202 22:21:23.978335  703886 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem (1078 bytes)
	I0202 22:21:23.978407  703886 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/cert.pem (1123 bytes)
	I0202 22:21:23.978454  703886 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/key.pem (1679 bytes)
	I0202 22:21:23.978510  703886 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/3866382.pem (1708 bytes)
	I0202 22:21:23.979733  703886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202220909-386638/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0202 22:21:23.998515  703886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202220909-386638/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0202 22:21:24.016184  703886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202220909-386638/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0202 22:21:24.034903  703886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/calico-20220202220909-386638/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0202 22:21:24.054003  703886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0202 22:21:24.072962  703886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0202 22:21:24.092532  703886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0202 22:21:24.112007  703886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0202 22:21:24.131661  703886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0202 22:21:24.152078  703886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/386638.pem --> /usr/share/ca-certificates/386638.pem (1338 bytes)
	I0202 22:21:24.169912  703886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/3866382.pem --> /usr/share/ca-certificates/3866382.pem (1708 bytes)
	I0202 22:21:24.188538  703886 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0202 22:21:24.203249  703886 ssh_runner.go:195] Run: openssl version
	I0202 22:21:24.209511  703886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0202 22:21:24.220289  703886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0202 22:21:24.224783  703886 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb  2 21:42 /usr/share/ca-certificates/minikubeCA.pem
	I0202 22:21:24.224848  703886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0202 22:21:24.231754  703886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0202 22:21:24.242331  703886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/386638.pem && ln -fs /usr/share/ca-certificates/386638.pem /etc/ssl/certs/386638.pem"
	I0202 22:21:24.251981  703886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/386638.pem
	I0202 22:21:24.256268  703886 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb  2 21:47 /usr/share/ca-certificates/386638.pem
	I0202 22:21:24.256339  703886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/386638.pem
	I0202 22:21:24.262357  703886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/386638.pem /etc/ssl/certs/51391683.0"
	I0202 22:21:24.269834  703886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3866382.pem && ln -fs /usr/share/ca-certificates/3866382.pem /etc/ssl/certs/3866382.pem"
	I0202 22:21:24.277170  703886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3866382.pem
	I0202 22:21:24.280228  703886 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb  2 21:47 /usr/share/ca-certificates/3866382.pem
	I0202 22:21:24.280283  703886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3866382.pem
	I0202 22:21:24.285028  703886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3866382.pem /etc/ssl/certs/3ec20f2e.0"
	I0202 22:21:24.292308  703886 kubeadm.go:390] StartCluster: {Name:calico-20220202220909-386638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:calico-20220202220909-386638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0202 22:21:24.292426  703886 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0202 22:21:24.329697  703886 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0202 22:21:24.337328  703886 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0202 22:21:24.344469  703886 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0202 22:21:24.344514  703886 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0202 22:21:24.351503  703886 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0202 22:21:24.351550  703886 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0202 22:21:25.019972  703886 out.go:203]   - Generating certificates and keys ...
	I0202 22:21:28.534996  703886 out.go:203]   - Booting up control plane ...
	I0202 22:21:36.086745  703886 out.go:203]   - Configuring RBAC rules ...
	I0202 22:21:36.500549  703886 cni.go:93] Creating CNI manager for "calico"
	I0202 22:21:36.502702  703886 out.go:176] * Configuring Calico (Container Networking Interface) ...
	I0202 22:21:36.502899  703886 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.2/kubectl ...
	I0202 22:21:36.502921  703886 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202049 bytes)
	I0202 22:21:36.519037  703886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0202 22:21:38.147000  703886 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.627916439s)
	I0202 22:21:38.147055  703886 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0202 22:21:38.147136  703886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 22:21:38.147145  703886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=e7ecaa98a6d1dab5935ea4b7778c6e187f5bde82 minikube.k8s.io/name=calico-20220202220909-386638 minikube.k8s.io/updated_at=2022_02_02T22_21_38_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 22:21:38.258096  703886 ops.go:34] apiserver oom_adj: -16
	I0202 22:21:38.258182  703886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 22:21:38.869317  703886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 22:21:39.369326  703886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 22:21:39.869116  703886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 22:21:40.368750  703886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 22:21:40.869701  703886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 22:21:41.369093  703886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 22:21:41.869124  703886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 22:21:42.369629  703886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 22:21:42.869400  703886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 22:21:43.369121  703886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 22:21:43.868828  703886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 22:21:44.369614  703886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 22:21:44.869333  703886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 22:21:45.368680  703886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 22:21:45.868718  703886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 22:21:46.369123  703886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 22:21:46.869372  703886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 22:21:47.369003  703886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 22:21:47.869278  703886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 22:21:48.368786  703886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 22:21:48.869127  703886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 22:21:49.369304  703886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 22:21:49.869254  703886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0202 22:21:49.943406  703886 kubeadm.go:1007] duration metric: took 11.796324869s to wait for elevateKubeSystemPrivileges.
	I0202 22:21:49.943445  703886 kubeadm.go:392] StartCluster complete in 25.651145207s
	I0202 22:21:49.943467  703886 settings.go:142] acquiring lock: {Name:mkc564df8104e4c2326cd37cd909420c5fd7241d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 22:21:49.943582  703886 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
	I0202 22:21:49.945229  703886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig: {Name:mkd9197ef7cab52290ec1513b45875905284aec6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0202 22:21:50.464761  703886 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20220202220909-386638" rescaled to 1
	I0202 22:21:50.464864  703886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0202 22:21:50.464886  703886 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0202 22:21:50.464935  703886 addons.go:65] Setting storage-provisioner=true in profile "calico-20220202220909-386638"
	I0202 22:21:50.464858  703886 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0202 22:21:50.464958  703886 addons.go:153] Setting addon storage-provisioner=true in "calico-20220202220909-386638"
	W0202 22:21:50.464964  703886 addons.go:165] addon storage-provisioner should already be in state true
	I0202 22:21:50.464966  703886 addons.go:65] Setting default-storageclass=true in profile "calico-20220202220909-386638"
	I0202 22:21:50.464988  703886 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20220202220909-386638"
	I0202 22:21:50.464990  703886 host.go:66] Checking if "calico-20220202220909-386638" exists ...
	I0202 22:21:50.465114  703886 config.go:176] Loaded profile config "calico-20220202220909-386638": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0202 22:21:50.468306  703886 out.go:176] * Verifying Kubernetes components...
	I0202 22:21:50.465316  703886 cli_runner.go:133] Run: docker container inspect calico-20220202220909-386638 --format={{.State.Status}}
	I0202 22:21:50.468432  703886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0202 22:21:50.465426  703886 cli_runner.go:133] Run: docker container inspect calico-20220202220909-386638 --format={{.State.Status}}
	I0202 22:21:50.519743  703886 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0202 22:21:50.519908  703886 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0202 22:21:50.519918  703886 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0202 22:21:50.519967  703886 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220202220909-386638
	I0202 22:21:50.525210  703886 addons.go:153] Setting addon default-storageclass=true in "calico-20220202220909-386638"
	W0202 22:21:50.525247  703886 addons.go:165] addon default-storageclass should already be in state true
	I0202 22:21:50.525279  703886 host.go:66] Checking if "calico-20220202220909-386638" exists ...
	I0202 22:21:50.525810  703886 cli_runner.go:133] Run: docker container inspect calico-20220202220909-386638 --format={{.State.Status}}
	I0202 22:21:50.546747  703886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0202 22:21:50.549377  703886 node_ready.go:35] waiting up to 5m0s for node "calico-20220202220909-386638" to be "Ready" ...
	I0202 22:21:50.553827  703886 node_ready.go:49] node "calico-20220202220909-386638" has status "Ready":"True"
	I0202 22:21:50.553856  703886 node_ready.go:38] duration metric: took 4.451582ms waiting for node "calico-20220202220909-386638" to be "Ready" ...
	I0202 22:21:50.553866  703886 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0202 22:21:50.567862  703886 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace to be "Ready" ...
	I0202 22:21:50.589726  703886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49529 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/calico-20220202220909-386638/id_rsa Username:docker}
	I0202 22:21:50.598316  703886 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0202 22:21:50.598350  703886 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0202 22:21:50.598402  703886 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220202220909-386638
	I0202 22:21:50.653428  703886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49529 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/calico-20220202220909-386638/id_rsa Username:docker}
	I0202 22:21:50.731106  703886 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0202 22:21:50.843275  703886 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0202 22:21:52.618504  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:21:52.623420  703886 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.076631861s)
	I0202 22:21:52.623456  703886 start.go:777] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I0202 22:21:52.713724  703886 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.870408721s)
	I0202 22:21:52.713783  703886 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.982648698s)
	I0202 22:21:52.716402  703886 out.go:176] * Enabled addons: default-storageclass, storage-provisioner
	I0202 22:21:52.716445  703886 addons.go:417] enableAddons completed in 2.251563085s
	I0202 22:21:55.111281  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:21:57.585432  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:21:59.586976  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:22:02.086450  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:22:04.112594  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:22:06.113564  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:22:08.611291  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:22:11.110290  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:22:13.111369  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:22:15.584576  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:22:17.610806  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:22:19.613303  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:22:22.086160  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:22:24.609618  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:22:27.085865  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:22:29.107700  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:22:31.111953  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:22:33.112257  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:22:35.585259  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:22:37.585546  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:22:39.611293  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:22:41.611617  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:22:44.085736  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:22:46.612060  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:22:48.615485  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:22:51.085324  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:22:53.085378  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:22:55.109667  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:22:57.111173  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:22:59.112227  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:23:01.585345  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:23:03.611762  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:23:06.084350  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:23:08.084668  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:23:10.112840  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:23:12.584692  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:23:14.585252  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:23:16.608504  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:23:18.610447  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:23:21.085008  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:23:23.112522  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:23:25.584608  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:23:27.611248  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:23:29.612007  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:23:32.085295  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:23:34.112390  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:23:36.585205  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:23:38.586120  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:23:40.609374  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:23:42.611312  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:23:44.611709  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:23:46.611975  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:23:49.085306  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:23:51.086169  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:23:53.112111  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:23:55.585303  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:23:58.112662  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:24:00.613039  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:24:03.112251  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:24:05.610177  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:24:07.611139  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:24:09.611446  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:24:12.112583  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:24:14.584461  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:24:16.584887  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:24:18.610464  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:24:20.610765  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:24:22.611130  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:24:25.110521  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:24:27.610290  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:24:29.611856  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:24:32.085337  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:24:34.111878  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:24:36.585095  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:24:38.585139  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:24:40.585396  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:24:42.611930  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:24:45.084990  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:24:47.085596  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:24:49.113791  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:24:51.585086  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:24:54.109390  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:24:56.110711  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:24:58.111818  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:25:00.610723  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:25:03.110652  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:25:05.612161  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:25:08.085107  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:25:10.112706  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:25:12.585903  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:25:15.112047  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:25:17.611135  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:25:19.611562  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:25:21.611782  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:25:23.612441  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:25:26.110659  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:25:28.112537  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:25:30.585136  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:25:32.612436  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:25:35.085328  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:25:37.112095  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:25:39.585858  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:25:41.586006  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:25:43.609956  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:25:46.110172  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:25:48.111088  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:25:50.112930  703886 pod_ready.go:102] pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace has status "Ready":"False"
	I0202 22:25:50.588122  703886 pod_ready.go:81] duration metric: took 4m0.020188498s waiting for pod "calico-kube-controllers-8594699699-zs74l" in "kube-system" namespace to be "Ready" ...
	E0202 22:25:50.588148  703886 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0202 22:25:50.588180  703886 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-qbvgm" in "kube-system" namespace to be "Ready" ...
	I0202 22:25:52.597642  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:25:54.597853  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:25:56.598508  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:25:58.599977  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:26:00.611258  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:26:03.098393  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:26:05.611332  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:26:08.098306  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:26:10.113527  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:26:12.598021  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:26:14.598174  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:26:16.614158  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:26:19.098985  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:26:21.612983  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:26:24.111537  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:26:26.597775  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:26:28.610054  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:26:30.611610  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:26:33.112497  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:26:35.598380  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:26:37.611137  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:26:39.612979  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:26:42.112456  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:26:44.113132  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:26:46.598421  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:26:49.111864  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:26:51.112777  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:26:53.611085  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:26:55.611644  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:26:57.612971  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:27:00.111432  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:27:02.613010  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:27:05.112835  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:27:07.598295  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:27:09.598488  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:27:11.612414  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:27:14.098631  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:27:16.612073  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:27:19.110418  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:27:21.112971  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:27:23.598597  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:27:25.612363  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:27:27.612610  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:27:30.113451  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:27:32.597806  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:27:34.598069  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:27:36.612567  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:27:39.098142  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:27:41.111135  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:27:43.112791  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:27:45.612922  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:27:48.098343  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:27:50.111685  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:27:52.612061  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:27:54.612676  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:27:57.112717  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:27:59.598814  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:28:02.112838  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:28:04.598022  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:28:07.112811  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:28:09.611550  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:28:11.612724  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:28:14.098029  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:28:16.111954  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:28:18.598116  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:28:20.612483  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:28:23.111871  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:28:25.610720  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:28:27.614541  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:28:30.113519  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:28:32.599271  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:28:34.611962  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:28:37.098332  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:28:39.111576  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:28:41.598151  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:28:43.612999  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:28:46.098196  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:28:48.098533  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:28:50.112775  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:28:52.609236  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:28:54.611807  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:28:56.612954  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:28:59.111966  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:29:01.599021  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:29:04.112639  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:29:06.112888  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:29:08.113132  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:29:10.611740  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:29:12.612572  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:29:15.097950  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:29:17.112926  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:29:19.598253  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:29:21.598362  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:29:23.612194  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:29:26.097487  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:29:28.099283  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:29:30.111169  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:29:32.111913  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:29:34.598230  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:29:36.612532  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:29:38.612894  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:29:41.109027  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:29:43.111327  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:29:45.112526  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:29:47.598693  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:29:49.612668  703886 pod_ready.go:102] pod "calico-node-qbvgm" in "kube-system" namespace has status "Ready":"False"
	I0202 22:29:50.615786  703886 pod_ready.go:81] duration metric: took 4m0.027579283s waiting for pod "calico-node-qbvgm" in "kube-system" namespace to be "Ready" ...
	E0202 22:29:50.615817  703886 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0202 22:29:50.615845  703886 pod_ready.go:38] duration metric: took 8m0.061965981s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0202 22:29:50.619038  703886 out.go:176] 
	W0202 22:29:50.619328  703886 out.go:241] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0202 22:29:50.619378  703886 out.go:241] * 
	W0202 22:29:50.620400  703886 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0202 22:29:50.622641  703886 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (524.86s)
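The four-minute wait logged above is minikube's `pod_ready.go` loop polling the `calico-node` pod's `Ready` condition until a deadline expires. A minimal sketch of that wait loop follows (illustrative Python, not minikube's actual Go implementation; the injectable `sleep`/`clock` parameters and the `get_status` callback are assumptions added here for testability):

```python
import time


def wait_pod_ready(get_status, timeout_s, poll_s=2.0,
                   sleep=time.sleep, clock=time.monotonic):
    """Poll until the pod reports a Ready=True condition or the deadline passes.

    get_status() returns a pod-status dict shaped like the JSON in the log above:
    {"conditions": [{"type": "Ready", "status": "True"}, ...]}
    Returns True if the pod became Ready in time, False on timeout.
    """
    deadline = clock() + timeout_s
    while clock() < deadline:
        conditions = get_status().get("conditions", [])
        if any(c.get("type") == "Ready" and c.get("status") == "True"
               for c in conditions):
            return True
        sleep(poll_s)  # back off before re-checking, like the ~2.5s cadence above
    return False
```

In the failing run, the condition never flipped to `"True"`, so the loop exhausted its 4m budget and the caller reported `WaitExtra: waitPodCondition: timed out waiting for the condition`.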


Test pass (268/292)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 4.53
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.23.2/json-events 5.02
11 TestDownloadOnly/v1.23.2/preload-exists 0
15 TestDownloadOnly/v1.23.2/LogsDuration 0.07
17 TestDownloadOnly/v1.23.3-rc.0/json-events 4.87
18 TestDownloadOnly/v1.23.3-rc.0/preload-exists 0
22 TestDownloadOnly/v1.23.3-rc.0/LogsDuration 0.07
23 TestDownloadOnly/DeleteAll 0.32
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.2
25 TestDownloadOnlyKic 10.15
26 TestBinaryMirror 0.85
27 TestOffline 77.38
29 TestAddons/Setup 126.78
31 TestAddons/parallel/Registry 12.87
32 TestAddons/parallel/Ingress 43.8
33 TestAddons/parallel/MetricsServer 5.66
34 TestAddons/parallel/HelmTiller 9.48
36 TestAddons/parallel/CSI 64.06
38 TestAddons/serial/GCPAuth 40.39
39 TestAddons/StoppedEnableDisable 11.25
40 TestCertOptions 33.84
41 TestCertExpiration 218.7
42 TestDockerFlags 27.41
43 TestForceSystemdFlag 38.12
44 TestForceSystemdEnv 33
45 TestKVMDriverInstallOrUpdate 4.06
49 TestErrorSpam/setup 26.23
50 TestErrorSpam/start 0.85
51 TestErrorSpam/status 1.11
52 TestErrorSpam/pause 1.44
53 TestErrorSpam/unpause 1.52
54 TestErrorSpam/stop 10.94
57 TestFunctional/serial/CopySyncFile 0
58 TestFunctional/serial/StartWithProxy 43.06
59 TestFunctional/serial/AuditLog 0
60 TestFunctional/serial/SoftStart 5.01
61 TestFunctional/serial/KubeContext 0.03
62 TestFunctional/serial/KubectlGetPods 0.16
65 TestFunctional/serial/CacheCmd/cache/add_remote 2.66
66 TestFunctional/serial/CacheCmd/cache/add_local 1.49
67 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.06
68 TestFunctional/serial/CacheCmd/cache/list 0.06
69 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.35
70 TestFunctional/serial/CacheCmd/cache/cache_reload 1.81
71 TestFunctional/serial/CacheCmd/cache/delete 0.12
72 TestFunctional/serial/MinikubeKubectlCmd 0.11
73 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
74 TestFunctional/serial/ExtraConfig 24.54
76 TestFunctional/serial/LogsCmd 1.24
77 TestFunctional/serial/LogsFileCmd 1.25
79 TestFunctional/parallel/ConfigCmd 0.45
80 TestFunctional/parallel/DashboardCmd 5.15
81 TestFunctional/parallel/DryRun 0.58
82 TestFunctional/parallel/InternationalLanguage 0.42
83 TestFunctional/parallel/StatusCmd 1.43
86 TestFunctional/parallel/ServiceCmd 14.2
87 TestFunctional/parallel/AddonsCmd 0.17
88 TestFunctional/parallel/PersistentVolumeClaim 39.68
90 TestFunctional/parallel/SSHCmd 0.81
91 TestFunctional/parallel/CpCmd 1.68
92 TestFunctional/parallel/MySQL 21.65
93 TestFunctional/parallel/FileSync 0.36
94 TestFunctional/parallel/CertSync 2.29
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.42
102 TestFunctional/parallel/Version/short 0.06
103 TestFunctional/parallel/Version/components 0.62
104 TestFunctional/parallel/DockerEnv/bash 1.56
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
109 TestFunctional/parallel/ImageCommands/ImageBuild 4.84
110 TestFunctional/parallel/ImageCommands/Setup 1.16
111 TestFunctional/parallel/ProfileCmd/profile_not_create 0.57
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.23
116 TestFunctional/parallel/ProfileCmd/profile_list 0.55
117 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.06
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.49
119 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.12
120 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.57
121 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
122 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
126 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.27
128 TestFunctional/parallel/MountCmd/any-port 6.56
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.61
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.72
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.66
132 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
133 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
134 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
135 TestFunctional/parallel/MountCmd/specific-port 2.47
136 TestFunctional/delete_addon-resizer_images 0.1
137 TestFunctional/delete_my-image_image 0.03
138 TestFunctional/delete_minikube_cached_images 0.03
141 TestIngressAddonLegacy/StartLegacyK8sCluster 51.7
143 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 15.16
144 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.39
145 TestIngressAddonLegacy/serial/ValidateIngressAddons 62.2
148 TestJSONOutput/start/Command 41.31
149 TestJSONOutput/start/Audit 0
151 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
152 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
154 TestJSONOutput/pause/Command 0.65
155 TestJSONOutput/pause/Audit 0
157 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
158 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
160 TestJSONOutput/unpause/Command 0.63
161 TestJSONOutput/unpause/Audit 0
163 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
166 TestJSONOutput/stop/Command 10.89
167 TestJSONOutput/stop/Audit 0
169 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
170 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
171 TestErrorJSONOutput 0.28
173 TestKicCustomNetwork/create_custom_network 28.71
174 TestKicCustomNetwork/use_default_bridge_network 29.1
175 TestKicExistingNetwork 28.63
176 TestMainNoArgs 0.06
179 TestMountStart/serial/StartWithMountFirst 5.9
180 TestMountStart/serial/VerifyMountFirst 0.33
181 TestMountStart/serial/StartWithMountSecond 5.54
182 TestMountStart/serial/VerifyMountSecond 0.32
183 TestMountStart/serial/DeleteFirst 1.73
184 TestMountStart/serial/VerifyMountPostDelete 0.34
185 TestMountStart/serial/Stop 1.27
186 TestMountStart/serial/RestartStopped 6.82
187 TestMountStart/serial/VerifyMountPostStop 0.32
190 TestMultiNode/serial/FreshStart2Nodes 75.36
191 TestMultiNode/serial/DeployApp2Nodes 3.65
192 TestMultiNode/serial/PingHostFrom2Pods 0.82
193 TestMultiNode/serial/AddNode 27.82
194 TestMultiNode/serial/ProfileList 0.37
195 TestMultiNode/serial/CopyFile 12.04
196 TestMultiNode/serial/StopNode 2.51
197 TestMultiNode/serial/StartAfterStop 24.64
198 TestMultiNode/serial/RestartKeepsNodes 131.97
199 TestMultiNode/serial/DeleteNode 5.3
200 TestMultiNode/serial/StopMultiNode 21.71
201 TestMultiNode/serial/RestartMultiNode 58.14
202 TestMultiNode/serial/ValidateNameConflict 29.26
207 TestPreload 119.56
209 TestScheduledStopUnix 100.19
210 TestSkaffold 67.58
212 TestInsufficientStorage 14.79
213 TestRunningBinaryUpgrade 104.28
215 TestKubernetesUpgrade 160.14
216 TestMissingContainerUpgrade 101.52
217 TestStoppedBinaryUpgrade/Setup 0.41
219 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
220 TestNoKubernetes/serial/StartWithK8s 67.19
221 TestStoppedBinaryUpgrade/Upgrade 84.69
222 TestNoKubernetes/serial/StartWithStopK8s 21.07
231 TestPause/serial/Start 61.85
232 TestStoppedBinaryUpgrade/MinikubeLogs 2.79
233 TestNoKubernetes/serial/Start 203.65
234 TestPause/serial/SecondStartNoReconfiguration 5.13
235 TestPause/serial/Pause 0.65
236 TestPause/serial/VerifyStatus 0.39
237 TestPause/serial/Unpause 0.63
238 TestPause/serial/PauseAgain 0.87
239 TestPause/serial/DeletePaused 2.48
240 TestPause/serial/VerifyDeletedResources 0.61
253 TestStartStop/group/old-k8s-version/serial/FirstStart 122.34
255 TestStartStop/group/no-preload/serial/FirstStart 62.56
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.44
257 TestNoKubernetes/serial/ProfileList 2.09
258 TestNoKubernetes/serial/Stop 1.34
261 TestStartStop/group/embed-certs/serial/FirstStart 48.35
262 TestStartStop/group/no-preload/serial/DeployApp 8.33
263 TestStartStop/group/embed-certs/serial/DeployApp 9.41
264 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.59
265 TestStartStop/group/no-preload/serial/Stop 10.86
266 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.65
267 TestStartStop/group/embed-certs/serial/Stop 10.9
268 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
269 TestStartStop/group/no-preload/serial/SecondStart 336.76
270 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
271 TestStartStop/group/embed-certs/serial/SecondStart 339.7
272 TestStartStop/group/old-k8s-version/serial/DeployApp 8.48
273 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.55
274 TestStartStop/group/old-k8s-version/serial/Stop 10.92
275 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
276 TestStartStop/group/old-k8s-version/serial/SecondStart 412.59
278 TestStartStop/group/default-k8s-different-port/serial/FirstStart 44.68
279 TestStartStop/group/default-k8s-different-port/serial/DeployApp 8.44
280 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.58
281 TestStartStop/group/default-k8s-different-port/serial/Stop 10.87
282 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.21
283 TestStartStop/group/default-k8s-different-port/serial/SecondStart 340.63
284 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 8.01
285 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.19
286 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
287 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.39
288 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
289 TestStartStop/group/no-preload/serial/Pause 3.26
290 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.43
291 TestStartStop/group/embed-certs/serial/Pause 3.5
293 TestStartStop/group/newest-cni/serial/FirstStart 41.05
294 TestNetworkPlugins/group/auto/Start 42.85
295 TestStartStop/group/newest-cni/serial/DeployApp 0
296 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.76
297 TestStartStop/group/newest-cni/serial/Stop 10.77
298 TestNetworkPlugins/group/auto/KubeletFlags 0.43
299 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
300 TestNetworkPlugins/group/auto/NetCatPod 11.18
301 TestStartStop/group/newest-cni/serial/SecondStart 20.05
302 TestNetworkPlugins/group/auto/DNS 0.17
303 TestNetworkPlugins/group/auto/Localhost 0.18
304 TestNetworkPlugins/group/auto/HairPin 5.15
305 TestNetworkPlugins/group/false/Start 55.3
306 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
307 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
308 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.44
309 TestStartStop/group/newest-cni/serial/Pause 3.29
310 TestNetworkPlugins/group/kindnet/Start 59.67
311 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 8.02
312 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
313 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
314 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.07
315 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.42
316 TestStartStop/group/old-k8s-version/serial/Pause 3.59
317 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.46
318 TestStartStop/group/default-k8s-different-port/serial/Pause 3.86
319 TestNetworkPlugins/group/enable-default-cni/Start 47.7
320 TestNetworkPlugins/group/bridge/Start 48.2
321 TestNetworkPlugins/group/false/KubeletFlags 0.43
322 TestNetworkPlugins/group/false/NetCatPod 11.23
323 TestNetworkPlugins/group/false/DNS 0.16
324 TestNetworkPlugins/group/false/Localhost 0.14
325 TestNetworkPlugins/group/false/HairPin 5.15
326 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
327 TestNetworkPlugins/group/kubenet/Start 49.39
328 TestNetworkPlugins/group/kindnet/KubeletFlags 0.4
329 TestNetworkPlugins/group/kindnet/NetCatPod 15.25
330 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.4
331 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.23
332 TestNetworkPlugins/group/bridge/KubeletFlags 0.44
333 TestNetworkPlugins/group/bridge/NetCatPod 11.32
334 TestNetworkPlugins/group/kindnet/DNS 0.19
335 TestNetworkPlugins/group/kindnet/Localhost 0.17
336 TestNetworkPlugins/group/kindnet/HairPin 0.16
337 TestNetworkPlugins/group/cilium/Start 74.3
338 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
339 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
340 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
341 TestNetworkPlugins/group/bridge/DNS 0.17
342 TestNetworkPlugins/group/bridge/Localhost 0.15
343 TestNetworkPlugins/group/bridge/HairPin 0.14
344 TestNetworkPlugins/group/custom-weave/Start 58.79
346 TestNetworkPlugins/group/kubenet/KubeletFlags 0.43
347 TestNetworkPlugins/group/kubenet/NetCatPod 11.21
348 TestNetworkPlugins/group/kubenet/DNS 0.18
349 TestNetworkPlugins/group/kubenet/Localhost 0.14
350 TestNetworkPlugins/group/kubenet/HairPin 0.15
351 TestNetworkPlugins/group/custom-weave/KubeletFlags 0.45
352 TestNetworkPlugins/group/custom-weave/NetCatPod 11.21
353 TestNetworkPlugins/group/cilium/ControllerPod 5.02
354 TestNetworkPlugins/group/cilium/KubeletFlags 0.41
355 TestNetworkPlugins/group/cilium/NetCatPod 11
356 TestNetworkPlugins/group/cilium/DNS 0.17
357 TestNetworkPlugins/group/cilium/Localhost 0.15
358 TestNetworkPlugins/group/cilium/HairPin 0.16
TestDownloadOnly/v1.16.0/json-events (4.53s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:73: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220202214154-386638 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:73: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220202214154-386638 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.533366642s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (4.53s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:175: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220202214154-386638
aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220202214154-386638: exit status 85 (71.444463ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/02/02 21:41:54
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.17.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220202214154-386638"

-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
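Note that LogsDuration passes even though `minikube logs` exits with status 85: a download-only profile has no control-plane node, so the test treats that specific non-zero exit as the expected outcome. A hedged sketch of that pattern, asserting an expected non-zero exit code from a subprocess (generic Python, not the minikube test harness; `run_expect_exit` is a hypothetical helper):

```python
import subprocess
import sys


def run_expect_exit(cmd, expected_code):
    """Run cmd and require it to exit with the given (possibly non-zero) code,
    the way the LogsDuration test above accepts `exit status 85`.
    Returns captured stdout on success; raises AssertionError otherwise."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode != expected_code:
        raise AssertionError(f"got exit {proc.returncode}, want {expected_code}")
    return proc.stdout


# Simulate a command that prints advice and exits 85, like `minikube logs`
# against a download-only profile.
out = run_expect_exit(
    [sys.executable, "-c",
     "print('control plane node does not exist'); raise SystemExit(85)"],
    85,
)
```

This is why the log shows both `Non-zero exit ... exit status 85` and `--- PASS`: the non-zero code is asserted, not tolerated.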

TestDownloadOnly/v1.23.2/json-events (5.02s)

=== RUN   TestDownloadOnly/v1.23.2/json-events
aaa_download_only_test.go:73: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220202214154-386638 --force --alsologtostderr --kubernetes-version=v1.23.2 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:73: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220202214154-386638 --force --alsologtostderr --kubernetes-version=v1.23.2 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.024177982s)
--- PASS: TestDownloadOnly/v1.23.2/json-events (5.02s)

TestDownloadOnly/v1.23.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.23.2/preload-exists
--- PASS: TestDownloadOnly/v1.23.2/preload-exists (0.00s)

TestDownloadOnly/v1.23.2/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.23.2/LogsDuration
aaa_download_only_test.go:175: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220202214154-386638
aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220202214154-386638: exit status 85 (72.425008ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/02/02 21:41:59
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.17.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0202 21:41:59.389351  386799 out.go:297] Setting OutFile to fd 1 ...
	I0202 21:41:59.389429  386799 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0202 21:41:59.389458  386799 out.go:310] Setting ErrFile to fd 2...
	I0202 21:41:59.389461  386799 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0202 21:41:59.389559  386799 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/bin
	W0202 21:41:59.389672  386799 root.go:293] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/config/config.json: no such file or directory
	I0202 21:41:59.389798  386799 out.go:304] Setting JSON to true
	I0202 21:41:59.390788  386799 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":19472,"bootTime":1643818648,"procs":346,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0202 21:41:59.390874  386799 start.go:122] virtualization: kvm guest
	I0202 21:41:59.393567  386799 notify.go:174] Checking for updates...
	I0202 21:41:59.395814  386799 config.go:176] Loaded profile config "download-only-20220202214154-386638": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0202 21:41:59.395867  386799 start.go:706] api.Load failed for download-only-20220202214154-386638: filestore "download-only-20220202214154-386638": Docker machine "download-only-20220202214154-386638" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0202 21:41:59.395920  386799 driver.go:344] Setting default libvirt URI to qemu:///system
	W0202 21:41:59.395944  386799 start.go:706] api.Load failed for download-only-20220202214154-386638: filestore "download-only-20220202214154-386638": Docker machine "download-only-20220202214154-386638" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0202 21:41:59.433823  386799 docker.go:132] docker version: linux-20.10.12
	I0202 21:41:59.433970  386799 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0202 21:41:59.520508  386799 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:73 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:34 SystemTime:2022-02-02 21:41:59.462665178 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0202 21:41:59.520606  386799 docker.go:237] overlay module found
	I0202 21:41:59.522882  386799 start.go:281] selected driver: docker
	I0202 21:41:59.522897  386799 start.go:798] validating driver "docker" against &{Name:download-only-20220202214154-386638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220202214154-386638 Namespace:default APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0202 21:41:59.523125  386799 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0202 21:41:59.604917  386799 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:73 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:34 SystemTime:2022-02-02 21:41:59.549217103 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0202 21:41:59.605515  386799 cni.go:93] Creating CNI manager for ""
	I0202 21:41:59.605535  386799 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0202 21:41:59.605544  386799 start_flags.go:302] config:
	{Name:download-only-20220202214154-386638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:download-only-20220202214154-386638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:dock
er CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0202 21:41:59.607689  386799 cache.go:120] Beginning downloading kic base image for docker with docker
	I0202 21:41:59.609348  386799 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0202 21:41:59.609447  386799 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I0202 21:41:59.649524  386799 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I0202 21:41:59.649551  386799 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I0202 21:41:59.672611  386799 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.2/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4
	I0202 21:41:59.672641  386799 cache.go:57] Caching tarball of preloaded images
	I0202 21:41:59.672970  386799 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0202 21:41:59.675020  386799 preload.go:238] getting checksum for preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4 ...
	I0202 21:41:59.743237  386799 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.2/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4?checksum=md5:6fa926c88a747ae43bb3bda5a3741fe2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4
	I0202 21:42:02.758514  386799 preload.go:249] saving checksum for preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4 ...
	I0202 21:42:02.758641  386799 preload.go:256] verifying checksumm of /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4 ...
	I0202 21:42:03.790637  386799 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.2 on docker
	I0202 21:42:03.790801  386799 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/download-only-20220202214154-386638/config.json ...
	I0202 21:42:03.791006  386799 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0202 21:42:03.791288  386799 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.23.2/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/linux/v1.23.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220202214154-386638"

-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.2/LogsDuration (0.07s)

TestDownloadOnly/v1.23.3-rc.0/json-events (4.87s)

=== RUN   TestDownloadOnly/v1.23.3-rc.0/json-events
aaa_download_only_test.go:73: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220202214154-386638 --force --alsologtostderr --kubernetes-version=v1.23.3-rc.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:73: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220202214154-386638 --force --alsologtostderr --kubernetes-version=v1.23.3-rc.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.870103627s)
--- PASS: TestDownloadOnly/v1.23.3-rc.0/json-events (4.87s)

TestDownloadOnly/v1.23.3-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.23.3-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.23.3-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.23.3-rc.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.23.3-rc.0/LogsDuration
aaa_download_only_test.go:175: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220202214154-386638
aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220202214154-386638: exit status 85 (71.378357ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/02/02 21:42:04
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.17.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220202214154-386638"

-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.3-rc.0/LogsDuration (0.07s)

TestDownloadOnly/DeleteAll (0.32s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:193: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.32s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.2s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:205: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20220202214154-386638
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.20s)

TestDownloadOnlyKic (10.15s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:230: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-20220202214210-386638 --force --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:230: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-20220202214210-386638 --force --alsologtostderr --driver=docker  --container-runtime=docker: (8.950393746s)
helpers_test.go:176: Cleaning up "download-docker-20220202214210-386638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-20220202214210-386638
--- PASS: TestDownloadOnlyKic (10.15s)

TestBinaryMirror (0.85s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:316: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-20220202214220-386638 --alsologtostderr --binary-mirror http://127.0.0.1:39505 --driver=docker  --container-runtime=docker
helpers_test.go:176: Cleaning up "binary-mirror-20220202214220-386638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-20220202214220-386638
--- PASS: TestBinaryMirror (0.85s)

TestOffline (77.38s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline

=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-20220202220601-386638 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker

=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-20220202220601-386638 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m14.632189141s)
helpers_test.go:176: Cleaning up "offline-docker-20220202220601-386638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-20220202220601-386638
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-20220202220601-386638: (2.745587579s)
--- PASS: TestOffline (77.38s)

TestAddons/Setup (126.78s)

=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-linux-amd64 start -p addons-20220202214221-386638 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-linux-amd64 start -p addons-20220202214221-386638 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m6.779389434s)
--- PASS: TestAddons/Setup (126.78s)

TestAddons/parallel/Registry (12.87s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:281: registry stabilized in 13.632578ms

=== CONT  TestAddons/parallel/Registry
addons_test.go:283: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:343: "registry-d8cts" [be4ac6e1-38fa-47de-822d-5e708b72bde1] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:283: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008123139s

=== CONT  TestAddons/parallel/Registry
addons_test.go:286: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:343: "registry-proxy-r9x6j" [39ce3b62-9a71-4a0b-af23-db4d724782ec] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:286: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007068385s
addons_test.go:291: (dbg) Run:  kubectl --context addons-20220202214221-386638 delete po -l run=registry-test --now
addons_test.go:296: (dbg) Run:  kubectl --context addons-20220202214221-386638 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:296: (dbg) Done: kubectl --context addons-20220202214221-386638 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.159958177s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220202214221-386638 ip
2022/02/02 21:44:40 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:339: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220202214221-386638 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (12.87s)

TestAddons/parallel/Ingress (43.8s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:163: (dbg) Run:  kubectl --context addons-20220202214221-386638 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Run:  kubectl --context addons-20220202214221-386638 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:183: (dbg) Done: kubectl --context addons-20220202214221-386638 replace --force -f testdata/nginx-ingress-v1.yaml: (1.071012557s)
addons_test.go:196: (dbg) Run:  kubectl --context addons-20220202214221-386638 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:201: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [03fd6ac3-9a63-4215-a447-fad69302b2f6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:343: "nginx" [03fd6ac3-9a63-4215-a447-fad69302b2f6] Running
addons_test.go:201: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.006759811s
addons_test.go:213: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220202214221-386638 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:237: (dbg) Run:  kubectl --context addons-20220202214221-386638 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220202214221-386638 ip
addons_test.go:248: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220202214221-386638 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:257: (dbg) Done: out/minikube-linux-amd64 -p addons-20220202214221-386638 addons disable ingress-dns --alsologtostderr -v=1: (1.789842333s)
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220202214221-386638 addons disable ingress --alsologtostderr -v=1

=== CONT  TestAddons/parallel/Ingress
addons_test.go:262: (dbg) Done: out/minikube-linux-amd64 -p addons-20220202214221-386638 addons disable ingress --alsologtostderr -v=1: (28.885748396s)
--- PASS: TestAddons/parallel/Ingress (43.80s)

TestAddons/parallel/MetricsServer (5.66s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:358: metrics-server stabilized in 12.519158ms
addons_test.go:360: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:343: "metrics-server-6b76bd68b6-lxtsr" [8c2b6814-9dc8-44bf-84b5-ae1164efaa8b] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:360: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.008954211s

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:366: (dbg) Run:  kubectl --context addons-20220202214221-386638 top pods -n kube-system

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220202214221-386638 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.66s)

TestAddons/parallel/HelmTiller (9.48s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:407: tiller-deploy stabilized in 17.094044ms
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:409: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
=== CONT  TestAddons/parallel/HelmTiller
helpers_test.go:343: "tiller-deploy-6d67d5465d-vddpd" [3c4c12ff-f630-4acf-9c3e-7842d0418997] Running
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:409: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.00706233s
addons_test.go:424: (dbg) Run:  kubectl --context addons-20220202214221-386638 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:424: (dbg) Done: kubectl --context addons-20220202214221-386638 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.117606644s)
addons_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220202214221-386638 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.48s)

TestAddons/parallel/CSI (64.06s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:512: csi-hostpath-driver pods stabilized in 18.298411ms
addons_test.go:515: (dbg) Run:  kubectl --context addons-20220202214221-386638 create -f testdata/csi-hostpath-driver/pvc.yaml

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:520: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20220202214221-386638 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:525: (dbg) Run:  kubectl --context addons-20220202214221-386638 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:530: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:343: "task-pv-pod" [929c7f43-3a94-4281-a2f6-77a2dd131d05] Pending
helpers_test.go:343: "task-pv-pod" [929c7f43-3a94-4281-a2f6-77a2dd131d05] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod" [929c7f43-3a94-4281-a2f6-77a2dd131d05] Running
addons_test.go:530: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 28.007213767s
addons_test.go:535: (dbg) Run:  kubectl --context addons-20220202214221-386638 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:540: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20220202214221-386638 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20220202214221-386638 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:545: (dbg) Run:  kubectl --context addons-20220202214221-386638 delete pod task-pv-pod
addons_test.go:545: (dbg) Done: kubectl --context addons-20220202214221-386638 delete pod task-pv-pod: (1.347857528s)
addons_test.go:551: (dbg) Run:  kubectl --context addons-20220202214221-386638 delete pvc hpvc
addons_test.go:557: (dbg) Run:  kubectl --context addons-20220202214221-386638 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:562: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20220202214221-386638 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:567: (dbg) Run:  kubectl --context addons-20220202214221-386638 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:572: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:343: "task-pv-pod-restore" [2bf235f9-28a2-4cd5-9a2e-6b69903bdcfc] Pending
helpers_test.go:343: "task-pv-pod-restore" [2bf235f9-28a2-4cd5-9a2e-6b69903bdcfc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod-restore" [2bf235f9-28a2-4cd5-9a2e-6b69903bdcfc] Running
addons_test.go:572: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 23.005466425s
addons_test.go:577: (dbg) Run:  kubectl --context addons-20220202214221-386638 delete pod task-pv-pod-restore
addons_test.go:581: (dbg) Run:  kubectl --context addons-20220202214221-386638 delete pvc hpvc-restore
addons_test.go:585: (dbg) Run:  kubectl --context addons-20220202214221-386638 delete volumesnapshot new-snapshot-demo
addons_test.go:589: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220202214221-386638 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:589: (dbg) Done: out/minikube-linux-amd64 -p addons-20220202214221-386638 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.190685513s)
addons_test.go:593: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220202214221-386638 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (64.06s)

TestAddons/serial/GCPAuth (40.39s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:604: (dbg) Run:  kubectl --context addons-20220202214221-386638 create -f testdata/busybox.yaml
addons_test.go:610: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [39cc5d43-1681-4ac5-9f57-340df4640b0a] Pending
helpers_test.go:343: "busybox" [39cc5d43-1681-4ac5-9f57-340df4640b0a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [39cc5d43-1681-4ac5-9f57-340df4640b0a] Running
addons_test.go:610: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 8.005934011s
addons_test.go:616: (dbg) Run:  kubectl --context addons-20220202214221-386638 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:653: (dbg) Run:  kubectl --context addons-20220202214221-386638 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:666: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220202214221-386638 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:666: (dbg) Done: out/minikube-linux-amd64 -p addons-20220202214221-386638 addons disable gcp-auth --alsologtostderr -v=1: (6.076460316s)
addons_test.go:682: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220202214221-386638 addons enable gcp-auth
addons_test.go:682: (dbg) Done: out/minikube-linux-amd64 -p addons-20220202214221-386638 addons enable gcp-auth: (2.955386575s)
addons_test.go:688: (dbg) Run:  kubectl --context addons-20220202214221-386638 apply -f testdata/private-image.yaml
addons_test.go:695: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:343: "private-image-7f8587d5b7-8kllc" [f197155f-2395-4e8f-a7c5-3e6c5f050619] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])
helpers_test.go:343: "private-image-7f8587d5b7-8kllc" [f197155f-2395-4e8f-a7c5-3e6c5f050619] Running
addons_test.go:695: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image healthy within 12.005738578s
addons_test.go:701: (dbg) Run:  kubectl --context addons-20220202214221-386638 apply -f testdata/private-image-eu.yaml
addons_test.go:706: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:343: "private-image-eu-869dcfd8c7-smj7v" [2aad4167-4105-4c5d-a7b1-83886c1db5b8] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])
helpers_test.go:343: "private-image-eu-869dcfd8c7-smj7v" [2aad4167-4105-4c5d-a7b1-83886c1db5b8] Running
addons_test.go:706: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image-eu healthy within 10.006504825s
--- PASS: TestAddons/serial/GCPAuth (40.39s)

TestAddons/StoppedEnableDisable (11.25s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:133: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-20220202214221-386638
addons_test.go:133: (dbg) Done: out/minikube-linux-amd64 stop -p addons-20220202214221-386638: (11.064612771s)
addons_test.go:137: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-20220202214221-386638
addons_test.go:141: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-20220202214221-386638
--- PASS: TestAddons/StoppedEnableDisable (11.25s)

TestCertOptions (33.84s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20220202221013-386638 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
=== CONT  TestCertOptions
cert_options_test.go:50: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20220202221013-386638 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (30.193717235s)
cert_options_test.go:61: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20220202221013-386638 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:89: (dbg) Run:  kubectl --context cert-options-20220202221013-386638 config view
E0202 22:10:44.002424  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202220438-386638/client.crt: no such file or directory
cert_options_test.go:101: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-20220202221013-386638 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-20220202221013-386638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20220202221013-386638
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20220202221013-386638: (2.80391062s)
--- PASS: TestCertOptions (33.84s)

TestCertExpiration (218.7s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:124: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220202220913-386638 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
E0202 22:09:27.875191  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202214221-386638/client.crt: no such file or directory
=== CONT  TestCertExpiration
cert_options_test.go:124: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220202220913-386638 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (31.675932672s)
=== CONT  TestCertExpiration
cert_options_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220202220913-386638 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
=== CONT  TestCertExpiration
cert_options_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220202220913-386638 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (4.435594506s)
helpers_test.go:176: Cleaning up "cert-expiration-20220202220913-386638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-20220202220913-386638
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-20220202220913-386638: (2.586271709s)
--- PASS: TestCertExpiration (218.70s)

TestDockerFlags (27.41s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:46: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-20220202220945-386638 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0202 22:09:58.661029  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202214710-386638/client.crt: no such file or directory
=== CONT  TestDockerFlags
docker_test.go:46: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-20220202220945-386638 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (24.088034187s)
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-20220202220945-386638 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:62: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-20220202220945-386638 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:176: Cleaning up "docker-flags-20220202220945-386638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-20220202220945-386638
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-20220202220945-386638: (2.560779376s)
--- PASS: TestDockerFlags (27.41s)

TestForceSystemdFlag (38.12s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20220202220831-386638 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20220202220831-386638 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (35.042536409s)
docker_test.go:105: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-20220202220831-386638 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-flag-20220202220831-386638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20220202220831-386638
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20220202220831-386638: (2.597273658s)
--- PASS: TestForceSystemdFlag (38.12s)

TestForceSystemdEnv (33s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:151: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20220202220912-386638 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
=== CONT  TestForceSystemdEnv
docker_test.go:151: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20220202220912-386638 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (30.000236953s)
docker_test.go:105: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-20220202220912-386638 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-env-20220202220912-386638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20220202220912-386638
=== CONT  TestForceSystemdEnv
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20220202220912-386638: (2.543703412s)
--- PASS: TestForceSystemdEnv (33.00s)

TestKVMDriverInstallOrUpdate (4.06s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.06s)

TestErrorSpam/setup (26.23s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:79: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20220202214626-386638 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220202214626-386638 --driver=docker  --container-runtime=docker
error_spam_test.go:79: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20220202214626-386638 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220202214626-386638 --driver=docker  --container-runtime=docker: (26.230443439s)
error_spam_test.go:89: acceptable stderr: "! Your cgroup does not allow setting memory."
--- PASS: TestErrorSpam/setup (26.23s)

TestErrorSpam/start (0.85s)

=== RUN   TestErrorSpam/start
error_spam_test.go:214: Cleaning up 1 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220202214626-386638 --log_dir /tmp/nospam-20220202214626-386638 start --dry-run
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220202214626-386638 --log_dir /tmp/nospam-20220202214626-386638 start --dry-run
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220202214626-386638 --log_dir /tmp/nospam-20220202214626-386638 start --dry-run
--- PASS: TestErrorSpam/start (0.85s)

TestErrorSpam/status (1.11s)

=== RUN   TestErrorSpam/status
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220202214626-386638 --log_dir /tmp/nospam-20220202214626-386638 status
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220202214626-386638 --log_dir /tmp/nospam-20220202214626-386638 status
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220202214626-386638 --log_dir /tmp/nospam-20220202214626-386638 status
--- PASS: TestErrorSpam/status (1.11s)

TestErrorSpam/pause (1.44s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220202214626-386638 --log_dir /tmp/nospam-20220202214626-386638 pause
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220202214626-386638 --log_dir /tmp/nospam-20220202214626-386638 pause
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220202214626-386638 --log_dir /tmp/nospam-20220202214626-386638 pause
--- PASS: TestErrorSpam/pause (1.44s)

TestErrorSpam/unpause (1.52s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220202214626-386638 --log_dir /tmp/nospam-20220202214626-386638 unpause
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220202214626-386638 --log_dir /tmp/nospam-20220202214626-386638 unpause
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220202214626-386638 --log_dir /tmp/nospam-20220202214626-386638 unpause
--- PASS: TestErrorSpam/unpause (1.52s)

TestErrorSpam/stop (10.94s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220202214626-386638 --log_dir /tmp/nospam-20220202214626-386638 stop
error_spam_test.go:157: (dbg) Done: out/minikube-linux-amd64 -p nospam-20220202214626-386638 --log_dir /tmp/nospam-20220202214626-386638 stop: (10.68554889s)
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220202214626-386638 --log_dir /tmp/nospam-20220202214626-386638 stop
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220202214626-386638 --log_dir /tmp/nospam-20220202214626-386638 stop
--- PASS: TestErrorSpam/stop (10.94s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1715: local sync path: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/test/nested/copy/386638/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (43.06s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2097: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220202214710-386638 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2097: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220202214710-386638 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (43.061050627s)
--- PASS: TestFunctional/serial/StartWithProxy (43.06s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.01s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220202214710-386638 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220202214710-386638 --alsologtostderr -v=8: (5.004407024s)
functional_test.go:659: soft start took 5.005071241s for "functional-20220202214710-386638" cluster.
--- PASS: TestFunctional/serial/SoftStart (5.01s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.16s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-20220202214710-386638 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.16s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.66s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1050: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 cache add k8s.gcr.io/pause:3.1
functional_test.go:1050: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 cache add k8s.gcr.io/pause:3.3
functional_test.go:1050: (dbg) Done: out/minikube-linux-amd64 -p functional-20220202214710-386638 cache add k8s.gcr.io/pause:3.3: (1.150442946s)
functional_test.go:1050: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 cache add k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.66s)

TestFunctional/serial/CacheCmd/cache/add_local (1.49s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1081: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220202214710-386638 /tmp/functional-20220202214710-3866381563748520
functional_test.go:1093: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 cache add minikube-local-cache-test:functional-20220202214710-386638
functional_test.go:1093: (dbg) Done: out/minikube-linux-amd64 -p functional-20220202214710-386638 cache add minikube-local-cache-test:functional-20220202214710-386638: (1.20657855s)
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 cache delete minikube-local-cache-test:functional-20220202214710-386638
functional_test.go:1087: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220202214710-386638
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.49s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1114: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1128: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.81s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1151: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1157: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1157: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220202214710-386638 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (347.395236ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 cache reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.81s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1176: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1176: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 kubectl -- --context functional-20220202214710-386638 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-20220202214710-386638 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (24.54s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220202214710-386638 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220202214710-386638 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (24.542582366s)
functional_test.go:757: restart took 24.542855169s for "functional-20220202214710-386638" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (24.54s)

TestFunctional/serial/LogsCmd (1.24s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1240: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 logs
functional_test.go:1240: (dbg) Done: out/minikube-linux-amd64 -p functional-20220202214710-386638 logs: (1.239658648s)
--- PASS: TestFunctional/serial/LogsCmd (1.24s)

TestFunctional/serial/LogsFileCmd (1.25s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1257: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 logs --file /tmp/functional-20220202214710-3866382522564457/logs.txt
functional_test.go:1257: (dbg) Done: out/minikube-linux-amd64 -p functional-20220202214710-386638 logs --file /tmp/functional-20220202214710-3866382522564457/logs.txt: (1.251146108s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.25s)

TestFunctional/parallel/ConfigCmd (0.45s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1203: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1203: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 config get cpus
functional_test.go:1203: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220202214710-386638 config get cpus: exit status 14 (78.12807ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1203: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 config set cpus 2
functional_test.go:1203: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 config get cpus
functional_test.go:1203: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 config unset cpus
functional_test.go:1203: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1203: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220202214710-386638 config get cpus: exit status 14 (73.971322ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)

TestFunctional/parallel/DashboardCmd (5.15s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:906: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220202214710-386638 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:911: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220202214710-386638 --alsologtostderr -v=1] ...

=== CONT  TestFunctional/parallel/DashboardCmd
helpers_test.go:507: unable to kill pid 424720: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (5.15s)

TestFunctional/parallel/DryRun (0.58s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:975: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220202214710-386638 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220202214710-386638 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (255.00613ms)

-- stdout --
	* [functional-20220202214710-386638] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13251
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities

-- /stdout --
** stderr ** 
	I0202 21:48:52.801733  423234 out.go:297] Setting OutFile to fd 1 ...
	I0202 21:48:52.801804  423234 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0202 21:48:52.801811  423234 out.go:310] Setting ErrFile to fd 2...
	I0202 21:48:52.801821  423234 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0202 21:48:52.801924  423234 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/bin
	I0202 21:48:52.802126  423234 out.go:304] Setting JSON to false
	I0202 21:48:52.803430  423234 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":19885,"bootTime":1643818648,"procs":556,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0202 21:48:52.803496  423234 start.go:122] virtualization: kvm guest
	I0202 21:48:52.806539  423234 out.go:176] * [functional-20220202214710-386638] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	I0202 21:48:52.807934  423234 out.go:176]   - MINIKUBE_LOCATION=13251
	I0202 21:48:52.809407  423234 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0202 21:48:52.810763  423234 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
	I0202 21:48:52.812106  423234 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube
	I0202 21:48:52.813636  423234 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0202 21:48:52.814066  423234 config.go:176] Loaded profile config "functional-20220202214710-386638": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0202 21:48:52.814430  423234 driver.go:344] Setting default libvirt URI to qemu:///system
	I0202 21:48:52.860212  423234 docker.go:132] docker version: linux-20.10.12
	I0202 21:48:52.860288  423234 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0202 21:48:52.957113  423234 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:74 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:40 SystemTime:2022-02-02 21:48:52.89190908 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0202 21:48:52.957203  423234 docker.go:237] overlay module found
	I0202 21:48:52.978370  423234 out.go:176] * Using the docker driver based on existing profile
	I0202 21:48:52.978400  423234 start.go:281] selected driver: docker
	I0202 21:48:52.978417  423234 start.go:798] validating driver "docker" against &{Name:functional-20220202214710-386638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:functional-20220202214710-386638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0202 21:48:52.978551  423234 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0202 21:48:52.978622  423234 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0202 21:48:52.978646  423234 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0202 21:48:52.981310  423234 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0202 21:48:52.983501  423234 out.go:176] 
	W0202 21:48:52.983609  423234 out.go:241] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0202 21:48:52.985043  423234 out.go:176] 

** /stderr **
functional_test.go:992: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220202214710-386638 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.58s)

TestFunctional/parallel/InternationalLanguage (0.42s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1021: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220202214710-386638 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1021: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220202214710-386638 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (420.920347ms)

-- stdout --
	* [functional-20220202214710-386638] minikube v1.25.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13251
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Utilisation du pilote docker basé sur le profil existant
	  - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities

-- /stdout --
** stderr ** 
	I0202 21:48:48.092361  421329 out.go:297] Setting OutFile to fd 1 ...
	I0202 21:48:48.092482  421329 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0202 21:48:48.092494  421329 out.go:310] Setting ErrFile to fd 2...
	I0202 21:48:48.092499  421329 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0202 21:48:48.092680  421329 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/bin
	I0202 21:48:48.092993  421329 out.go:304] Setting JSON to false
	I0202 21:48:48.094253  421329 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":19880,"bootTime":1643818648,"procs":540,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0202 21:48:48.094328  421329 start.go:122] virtualization: kvm guest
	I0202 21:48:48.097475  421329 out.go:176] * [functional-20220202214710-386638] minikube v1.25.1 sur Ubuntu 20.04 (kvm/amd64)
	I0202 21:48:48.099143  421329 out.go:176]   - MINIKUBE_LOCATION=13251
	I0202 21:48:48.100611  421329 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0202 21:48:48.102127  421329 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
	I0202 21:48:48.103446  421329 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube
	I0202 21:48:48.104693  421329 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0202 21:48:48.105205  421329 config.go:176] Loaded profile config "functional-20220202214710-386638": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0202 21:48:48.105665  421329 driver.go:344] Setting default libvirt URI to qemu:///system
	I0202 21:48:48.149875  421329 docker.go:132] docker version: linux-20.10.12
	I0202 21:48:48.149975  421329 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0202 21:48:48.261150  421329 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:74 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:39 SystemTime:2022-02-02 21:48:48.186180983 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0202 21:48:48.261289  421329 docker.go:237] overlay module found
	I0202 21:48:48.305486  421329 out.go:176] * Utilisation du pilote docker basé sur le profil existant
	I0202 21:48:48.305517  421329 start.go:281] selected driver: docker
	I0202 21:48:48.305532  421329 start.go:798] validating driver "docker" against &{Name:functional-20220202214710-386638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:functional-20220202214710-386638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0202 21:48:48.305695  421329 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0202 21:48:48.305730  421329 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0202 21:48:48.305763  421329 out.go:241] ! Votre groupe de contrôle ne permet pas de définir la mémoire.
	! Votre groupe de contrôle ne permet pas de définir la mémoire.
	I0202 21:48:48.417799  421329 out.go:176]   - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0202 21:48:48.436669  421329 out.go:176] 
	W0202 21:48:48.436878  421329 out.go:241] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0202 21:48:48.445481  421329 out.go:176] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.42s)

TestFunctional/parallel/StatusCmd (1.43s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:855: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 status

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:861: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:873: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.43s)

TestFunctional/parallel/ServiceCmd (14.2s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1439: (dbg) Run:  kubectl --context functional-20220202214710-386638 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-20220202214710-386638 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:343: "hello-node-54fbb85-hpc4n" [61f85d7e-3ac3-4969-aa93-54b516ea23dc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:343: "hello-node-54fbb85-hpc4n" [61f85d7e-3ac3-4969-aa93-54b516ea23dc] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 12.006929014s
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 service list

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1468: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 service --namespace=default --https --url hello-node

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1477: found endpoint: https://192.168.49.2:31423
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 service hello-node --url --format={{.IP}}

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1497: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 service hello-node --url

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1503: found endpoint for hello-node: http://192.168.49.2:31423
functional_test.go:1514: Attempting to fetch http://192.168.49.2:31423 ...
functional_test.go:1534: http://192.168.49.2:31423: success! body:

Hostname: hello-node-54fbb85-hpc4n

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=172.17.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31423
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmd (14.20s)

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1549: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 addons list
functional_test.go:1561: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (39.68s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:343: "storage-provisioner" [4660a715-0b0d-419a-a7b1-650bf4a8466f] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.048649344s
functional_test_pvc_test.go:50: (dbg) Run:  kubectl --context functional-20220202214710-386638 get storageclass -o=json
functional_test_pvc_test.go:70: (dbg) Run:  kubectl --context functional-20220202214710-386638 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20220202214710-386638 get pvc myclaim -o=json
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20220202214710-386638 apply -f testdata/storage-provisioner/pod.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [0685efd6-7f57-4e58-be47-dabe920d070c] Pending
helpers_test.go:343: "sp-pod" [0685efd6-7f57-4e58-be47-dabe920d070c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [0685efd6-7f57-4e58-be47-dabe920d070c] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.051218539s
functional_test_pvc_test.go:101: (dbg) Run:  kubectl --context functional-20220202214710-386638 exec sp-pod -- touch /tmp/mount/foo

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:107: (dbg) Run:  kubectl --context functional-20220202214710-386638 delete -f testdata/storage-provisioner/pod.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:107: (dbg) Done: kubectl --context functional-20220202214710-386638 delete -f testdata/storage-provisioner/pod.yaml: (1.725107015s)
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20220202214710-386638 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [da7d5ca6-106c-404a-860b-bfc4821e2f9c] Pending

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [da7d5ca6-106c-404a-860b-bfc4821e2f9c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [da7d5ca6-106c-404a-860b-bfc4821e2f9c] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.006954388s
functional_test_pvc_test.go:115: (dbg) Run:  kubectl --context functional-20220202214710-386638 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (39.68s)

TestFunctional/parallel/SSHCmd (0.81s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1584: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1601: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.81s)

TestFunctional/parallel/CpCmd (1.68s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 cp testdata/cp-test.txt /home/docker/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 ssh -n functional-20220202214710-386638 "sudo cat /home/docker/cp-test.txt"

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 cp functional-20220202214710-386638:/home/docker/cp-test.txt /tmp/mk_test3860163128/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 ssh -n functional-20220202214710-386638 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.68s)

TestFunctional/parallel/MySQL (21.65s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1653: (dbg) Run:  kubectl --context functional-20220202214710-386638 replace --force -f testdata/mysql.yaml
functional_test.go:1659: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:343: "mysql-b87c45988-59vph" [0117fd5a-4582-471e-a40e-8aa4ab518661] Pending

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-b87c45988-59vph" [0117fd5a-4582-471e-a40e-8aa4ab518661] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-b87c45988-59vph" [0117fd5a-4582-471e-a40e-8aa4ab518661] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1659: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.006260629s
functional_test.go:1667: (dbg) Run:  kubectl --context functional-20220202214710-386638 exec mysql-b87c45988-59vph -- mysql -ppassword -e "show databases;"
functional_test.go:1667: (dbg) Non-zero exit: kubectl --context functional-20220202214710-386638 exec mysql-b87c45988-59vph -- mysql -ppassword -e "show databases;": exit status 1 (131.287776ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1667: (dbg) Run:  kubectl --context functional-20220202214710-386638 exec mysql-b87c45988-59vph -- mysql -ppassword -e "show databases;"
functional_test.go:1667: (dbg) Non-zero exit: kubectl --context functional-20220202214710-386638 exec mysql-b87c45988-59vph -- mysql -ppassword -e "show databases;": exit status 1 (136.481192ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1667: (dbg) Run:  kubectl --context functional-20220202214710-386638 exec mysql-b87c45988-59vph -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.65s)

TestFunctional/parallel/FileSync (0.36s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1789: Checking for existence of /etc/test/nested/copy/386638/hosts within VM
functional_test.go:1791: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 ssh "sudo cat /etc/test/nested/copy/386638/hosts"

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1796: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)

TestFunctional/parallel/CertSync (2.29s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1832: Checking for existence of /etc/ssl/certs/386638.pem within VM
functional_test.go:1833: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 ssh "sudo cat /etc/ssl/certs/386638.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1832: Checking for existence of /usr/share/ca-certificates/386638.pem within VM
functional_test.go:1833: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 ssh "sudo cat /usr/share/ca-certificates/386638.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1832: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1833: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 ssh "sudo cat /etc/ssl/certs/51391683.0"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1859: Checking for existence of /etc/ssl/certs/3866382.pem within VM
functional_test.go:1860: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 ssh "sudo cat /etc/ssl/certs/3866382.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1859: Checking for existence of /usr/share/ca-certificates/3866382.pem within VM
functional_test.go:1860: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 ssh "sudo cat /usr/share/ca-certificates/3866382.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1859: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1860: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.29s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-20220202214710-386638 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1887: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 ssh "sudo systemctl is-active crio"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1887: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220202214710-386638 ssh "sudo systemctl is-active crio": exit status 1 (416.00901ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.62s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2133: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.62s)

TestFunctional/parallel/DockerEnv/bash (1.56s)

=== RUN   TestFunctional/parallel/DockerEnv/bash

=== CONT  TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20220202214710-386638 docker-env) && out/minikube-linux-amd64 status -p functional-20220202214710-386638"

=== CONT  TestFunctional/parallel/DockerEnv/bash
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20220202214710-386638 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.56s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 image ls --format short

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220202214710-386638 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.23.2
k8s.gcr.io/kube-proxy:v1.23.2
k8s.gcr.io/kube-controller-manager:v1.23.2
k8s.gcr.io/kube-apiserver:v1.23.2
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220202214710-386638
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-20220202214710-386638
docker.io/kubernetesui/metrics-scraper:v1.0.7
docker.io/kubernetesui/dashboard:v2.3.1
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 image ls --format table
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220202214710-386638 image ls --format table:
|---------------------------------------------|----------------------------------|---------------|--------|
|                    Image                    |               Tag                |   Image ID    |  Size  |
|---------------------------------------------|----------------------------------|---------------|--------|
| k8s.gcr.io/kube-proxy                       | v1.23.2                          | d922ca3da64b3 | 112MB  |
| docker.io/library/nginx                     | alpine                           | bef258acf10dc | 23.4MB |
| k8s.gcr.io/kube-controller-manager          | v1.23.2                          | 4783639ba7e03 | 125MB  |
| k8s.gcr.io/kube-scheduler                   | v1.23.2                          | 6114d758d6d16 | 53.5MB |
| k8s.gcr.io/kube-apiserver                   | v1.23.2                          | 8a0228dd6a683 | 135MB  |
| docker.io/kubernetesui/dashboard            | v2.3.1                           | e1482a24335a6 | 220MB  |
| gcr.io/google-containers/addon-resizer      | functional-20220202214710-386638 | ffd4cfbbe753e | 32.9MB |
| docker.io/kubernetesui/metrics-scraper      | v1.0.7                           | 7801cfc6d5c07 | 34.4MB |
| k8s.gcr.io/pause                            | 3.3                              | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                     | 56cc512116c8f | 4.4MB  |
| k8s.gcr.io/echoserver                       | 1.8                              | 82e4c8a736a4f | 95.4MB |
| k8s.gcr.io/etcd                             | 3.5.1-0                          | 25f8c7f3da61c | 293MB  |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                           | a4ca41631cc7a | 46.8MB |
| k8s.gcr.io/pause                            | 3.6                              | 6270bb605e12e | 683kB  |
| k8s.gcr.io/pause                            | 3.1                              | da86e6ba6ca19 | 742kB  |
| k8s.gcr.io/pause                            | latest                           | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-20220202214710-386638 | 042c548cc1191 | 30B    |
| docker.io/library/nginx                     | latest                           | c316d5a335a5c | 142MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                               | 6e38f40d628db | 31.5MB |
|---------------------------------------------|----------------------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 image ls --format json

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220202214710-386638 image ls --format json:
[{"id":"8a0228dd6a683beecf635200927ab22cc4d9fb4302c340cae4a4c4b2b146aa24","repoDigests":[],"repoTags":["k8s.gcr.io/kube-apiserver:v1.23.2"],"size":"135000000"},{"id":"042c548cc11917e0ddc9e0e077f91b9f4cb98a39596db09b7edf49ffa8ea47d5","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220202214710-386638"],"size":"30"},{"id":"6114d758d6d16d5b75586c98f8fb524d348fcbb125fb9be1e942dc7e91bbc5b4","repoDigests":[],"repoTags":["k8s.gcr.io/kube-scheduler:v1.23.2"],"size":"53500000"},{"id":"e1482a24335a6e76d438ae175f79409004588570d3e5dbb4c8140e025e848570","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:v2.3.1"],"size":"220000000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":[],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"46800000"},{"id":"7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:v1.0.7"],"size":"34400000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220202214710-386638"],"size":"32900000"},{"id":"c316d5a335a5cf324b0dc83b3da82d7608724769f6454f6d9a621f3ec2534a5a","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"bef258acf10dc257d641c47c3a600c92f87be4b4ce4a5e4752b3eade7533dcd9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23400000"},{"id":"4783639ba7e039dff291e4a9cc8a72f5f7c5bdd7f3441b57d3b5eb251cacc248","repoDigests":[],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.23.2"],"size":"125000000"},{"id":"d922ca3da64b3f8464058d9ebbc361dd82cc86ea59cd337a4e33967bc8ede44f","repoDigests":[],"repoTags":["k8s.gcr.io/kube-proxy:v1.23.2"],"size":"112000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d","repoDigests":[],"repoTags":["k8s.gcr.io/etcd:3.5.1-0"],"size":"293000000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 image ls --format yaml
2022/02/02 21:48:59 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220202214710-386638 image ls --format yaml:
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220202214710-386638
size: "32900000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 042c548cc11917e0ddc9e0e077f91b9f4cb98a39596db09b7edf49ffa8ea47d5
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220202214710-386638
size: "30"
- id: d922ca3da64b3f8464058d9ebbc361dd82cc86ea59cd337a4e33967bc8ede44f
repoDigests: []
repoTags:
- k8s.gcr.io/kube-proxy:v1.23.2
size: "112000000"
- id: a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests: []
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "46800000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: e1482a24335a6e76d438ae175f79409004588570d3e5dbb4c8140e025e848570
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:v2.3.1
size: "220000000"
- id: c316d5a335a5cf324b0dc83b3da82d7608724769f6454f6d9a621f3ec2534a5a
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: bef258acf10dc257d641c47c3a600c92f87be4b4ce4a5e4752b3eade7533dcd9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23400000"
- id: 6114d758d6d16d5b75586c98f8fb524d348fcbb125fb9be1e942dc7e91bbc5b4
repoDigests: []
repoTags:
- k8s.gcr.io/kube-scheduler:v1.23.2
size: "53500000"
- id: 4783639ba7e039dff291e4a9cc8a72f5f7c5bdd7f3441b57d3b5eb251cacc248
repoDigests: []
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.23.2
size: "125000000"
- id: 7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:v1.0.7
size: "34400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 8a0228dd6a683beecf635200927ab22cc4d9fb4302c340cae4a4c4b2b146aa24
repoDigests: []
repoTags:
- k8s.gcr.io/kube-apiserver:v1.23.2
size: "135000000"
- id: 25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d
repoDigests: []
repoTags:
- k8s.gcr.io/etcd:3.5.1-0
size: "293000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 ssh pgrep buildkitd

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220202214710-386638 ssh pgrep buildkitd: exit status 1 (359.513698ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 image build -t localhost/my-image:functional-20220202214710-386638 testdata/build

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p functional-20220202214710-386638 image build -t localhost/my-image:functional-20220202214710-386638 testdata/build: (4.164711244s)
functional_test.go:316: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220202214710-386638 image build -t localhost/my-image:functional-20220202214710-386638 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Waiting
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in ec06861c479c
Removing intermediate container ec06861c479c
---> 89ba8f587768
Step 3/3 : ADD content.txt /
---> 746a1c5e68a2
Successfully built 746a1c5e68a2
Successfully tagged localhost/my-image:functional-20220202214710-386638
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.84s)

TestFunctional/parallel/ImageCommands/Setup (1.16s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.117329999s)
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220202214710-386638
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.16s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.57s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1280: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.57s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:128: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20220202214710-386638 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:148: (dbg) Run:  kubectl --context functional-20220202214710-386638 apply -f testdata/testsvc.yaml

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:152: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:343: "nginx-svc" [8afd363f-8b2d-4088-be7b-63c08aedb6a9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:343: "nginx-svc" [8afd363f-8b2d-4088-be7b-63c08aedb6a9] Running

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:152: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.013314707s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.23s)

TestFunctional/parallel/ProfileCmd/profile_list (0.55s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: Took "462.32956ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1334: (dbg) Run:  out/minikube-linux-amd64 profile list -l

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1339: Took "83.742131ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.55s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220202214710-386638

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p functional-20220202214710-386638 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220202214710-386638: (3.768120679s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.06s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1371: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1376: Took "421.123818ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1384: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1389: Took "70.601252ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220202214710-386638

=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Done: out/minikube-linux-amd64 -p functional-20220202214710-386638 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220202214710-386638: (2.862795897s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.12s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220202214710-386638
functional_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220202214710-386638

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p functional-20220202214710-386638 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220202214710-386638: (4.05777883s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.57s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220202214710-386638 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:235: tunnel at http://10.110.42.195 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:370: (dbg) stopping [out/minikube-linux-amd64 -p functional-20220202214710-386638 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 image save gcr.io/google-containers/addon-resizer:functional-20220202214710-386638 /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar

=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Done: out/minikube-linux-amd64 -p functional-20220202214710-386638 image save gcr.io/google-containers/addon-resizer:functional-20220202214710-386638 /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar: (1.265613319s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.27s)

TestFunctional/parallel/MountCmd/any-port (6.56s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:76: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220202214710-386638 /tmp/mounttest1147678805:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:110: wrote "test-1643838528448503072" to /tmp/mounttest1147678805/created-by-test
functional_test_mount_test.go:110: wrote "test-1643838528448503072" to /tmp/mounttest1147678805/created-by-test-removed-by-pod
functional_test_mount_test.go:110: wrote "test-1643838528448503072" to /tmp/mounttest1147678805/test-1643838528448503072
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220202214710-386638 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (386.685028ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 ssh -- ls -la /mount-9p

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:136: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb  2 21:48 created-by-test
-rw-r--r-- 1 docker docker 24 Feb  2 21:48 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb  2 21:48 test-1643838528448503072
functional_test_mount_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 ssh cat /mount-9p/test-1643838528448503072

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:151: (dbg) Run:  kubectl --context functional-20220202214710-386638 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:343: "busybox-mount" [4426d4c0-25bc-4871-b993-45bf93353d5c] Pending

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:343: "busybox-mount" [4426d4c0-25bc-4871-b993-45bf93353d5c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:343: "busybox-mount" [4426d4c0-25bc-4871-b993-45bf93353d5c] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.006265581s
functional_test_mount_test.go:172: (dbg) Run:  kubectl --context functional-20220202214710-386638 logs busybox-mount

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 ssh stat /mount-9p/created-by-test

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 ssh stat /mount-9p/created-by-pod

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:93: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 ssh "sudo umount -f /mount-9p"

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:97: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220202214710-386638 /tmp/mounttest1147678805:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.56s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 image rm gcr.io/google-containers/addon-resizer:functional-20220202214710-386638

=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.61s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 image load /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Done: out/minikube-linux-amd64 -p functional-20220202214710-386638 image load /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar: (1.440884354s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.72s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220202214710-386638

=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220202214710-386638

=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p functional-20220202214710-386638 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220202214710-386638: (2.591782361s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220202214710-386638
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.66s)
TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1979: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1979: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1979: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)
TestFunctional/parallel/MountCmd/specific-port (2.47s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:225: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220202214710-386638 /tmp/mounttest2262862740:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220202214710-386638 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (514.727705ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:269: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 ssh -- ls -la /mount-9p
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:273: guest mount directory contents
total 0
functional_test_mount_test.go:275: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220202214710-386638 /tmp/mounttest2262862740:/mount-9p --alsologtostderr -v=1 --port 46464] ...
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:276: reading mount text
functional_test_mount_test.go:290: done reading mount text
functional_test_mount_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220202214710-386638 ssh "sudo umount -f /mount-9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:242: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220202214710-386638 ssh "sudo umount -f /mount-9p": exit status 1 (353.044803ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:244: "out/minikube-linux-amd64 -p functional-20220202214710-386638 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:246: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220202214710-386638 /tmp/mounttest2262862740:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.47s)
TestFunctional/delete_addon-resizer_images (0.1s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220202214710-386638
--- PASS: TestFunctional/delete_addon-resizer_images (0.10s)
TestFunctional/delete_my-image_image (0.03s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220202214710-386638
--- PASS: TestFunctional/delete_my-image_image (0.03s)
TestFunctional/delete_minikube_cached_images (0.03s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220202214710-386638
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)
TestIngressAddonLegacy/StartLegacyK8sCluster (51.7s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:40: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-20220202214922-386638 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0202 21:49:27.875654  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202214221-386638/client.crt: no such file or directory
E0202 21:49:27.881603  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202214221-386638/client.crt: no such file or directory
E0202 21:49:27.891884  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202214221-386638/client.crt: no such file or directory
E0202 21:49:27.912179  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202214221-386638/client.crt: no such file or directory
E0202 21:49:27.952494  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202214221-386638/client.crt: no such file or directory
E0202 21:49:28.032816  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202214221-386638/client.crt: no such file or directory
E0202 21:49:28.193289  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202214221-386638/client.crt: no such file or directory
E0202 21:49:28.513896  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202214221-386638/client.crt: no such file or directory
E0202 21:49:29.154208  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202214221-386638/client.crt: no such file or directory
E0202 21:49:30.434617  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202214221-386638/client.crt: no such file or directory
E0202 21:49:32.996419  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202214221-386638/client.crt: no such file or directory
E0202 21:49:38.116958  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202214221-386638/client.crt: no such file or directory
E0202 21:49:48.357268  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202214221-386638/client.crt: no such file or directory
E0202 21:50:08.837788  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202214221-386638/client.crt: no such file or directory
ingress_addon_legacy_test.go:40: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-20220202214922-386638 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (51.704353855s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (51.70s)
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (15.16s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220202214922-386638 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:71: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220202214922-386638 addons enable ingress --alsologtostderr -v=5: (15.156551963s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (15.16s)
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.39s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:80: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220202214922-386638 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.39s)
TestIngressAddonLegacy/serial/ValidateIngressAddons (62.2s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:163: (dbg) Run:  kubectl --context ingress-addon-legacy-20220202214922-386638 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:163: (dbg) Done: kubectl --context ingress-addon-legacy-20220202214922-386638 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.268397692s)
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20220202214922-386638 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:196: (dbg) Run:  kubectl --context ingress-addon-legacy-20220202214922-386638 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [b50e17f7-149d-4078-979a-25e9fc11265b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:343: "nginx" [b50e17f7-149d-4078-979a-25e9fc11265b] Running
E0202 21:50:49.798204  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202214221-386638/client.crt: no such file or directory
addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.00545737s
addons_test.go:213: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220202214922-386638 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:237: (dbg) Run:  kubectl --context ingress-addon-legacy-20220202214922-386638 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220202214922-386638 ip
addons_test.go:248: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220202214922-386638 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:257: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220202214922-386638 addons disable ingress-dns --alsologtostderr -v=1: (8.19535942s)
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220202214922-386638 addons disable ingress --alsologtostderr -v=1
addons_test.go:262: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220202214922-386638 addons disable ingress --alsologtostderr -v=1: (28.486964086s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (62.20s)
TestJSONOutput/start/Command (41.31s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-20220202215134-386638 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
E0202 21:52:11.719232  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202214221-386638/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-20220202215134-386638 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (41.311672024s)
--- PASS: TestJSONOutput/start/Command (41.31s)
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/pause/Command (0.65s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-20220202215134-386638 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/unpause/Command (0.63s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-20220202215134-386638 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/stop/Command (10.89s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-20220202215134-386638 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-20220202215134-386638 --output=json --user=testUser: (10.891446849s)
--- PASS: TestJSONOutput/stop/Command (10.89s)
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
TestErrorJSONOutput (0.28s)
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20220202215229-386638 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20220202215229-386638 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (65.825324ms)
-- stdout --
	{"specversion":"1.0","id":"eaaf8023-5df9-4479-a469-243cce1910d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220202215229-386638] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"00e31a23-425b-43e9-bfb3-4e60286a562e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13251"}}
	{"specversion":"1.0","id":"584c5414-5f7e-42ab-8f42-0532faf31eb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"14bc0f92-fc40-4b21-9fa0-8aa6e8e993fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig"}}
	{"specversion":"1.0","id":"42af7aef-34c4-40a1-ae10-6fb8c768d153","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube"}}
	{"specversion":"1.0","id":"79d38d49-8c71-4629-878e-9c57674bc9b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"09f61a02-9e56-48ee-b423-9458b056fb9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-20220202215229-386638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20220202215229-386638
--- PASS: TestErrorJSONOutput (0.28s)
TestKicCustomNetwork/create_custom_network (28.71s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220202215229-386638 --network=
kic_custom_network_test.go:58: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220202215229-386638 --network=: (26.433730411s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20220202215229-386638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220202215229-386638
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220202215229-386638: (2.240177154s)
--- PASS: TestKicCustomNetwork/create_custom_network (28.71s)
TestKicCustomNetwork/use_default_bridge_network (29.1s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220202215258-386638 --network=bridge
kic_custom_network_test.go:58: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220202215258-386638 --network=bridge: (26.994307753s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20220202215258-386638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220202215258-386638
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220202215258-386638: (2.073509623s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (29.10s)
TestKicExistingNetwork (28.63s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:94: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-20220202215327-386638 --network=existing-network
E0202 21:53:35.618112  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202214710-386638/client.crt: no such file or directory
E0202 21:53:35.623420  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202214710-386638/client.crt: no such file or directory
E0202 21:53:35.633686  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202214710-386638/client.crt: no such file or directory
E0202 21:53:35.654026  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202214710-386638/client.crt: no such file or directory
E0202 21:53:35.694317  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202214710-386638/client.crt: no such file or directory
E0202 21:53:35.774668  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202214710-386638/client.crt: no such file or directory
E0202 21:53:35.935119  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202214710-386638/client.crt: no such file or directory
E0202 21:53:36.255725  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202214710-386638/client.crt: no such file or directory
E0202 21:53:36.896655  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202214710-386638/client.crt: no such file or directory
E0202 21:53:38.177154  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202214710-386638/client.crt: no such file or directory
E0202 21:53:40.737967  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202214710-386638/client.crt: no such file or directory
E0202 21:53:45.858784  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202214710-386638/client.crt: no such file or directory
kic_custom_network_test.go:94: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-20220202215327-386638 --network=existing-network: (26.155792592s)
helpers_test.go:176: Cleaning up "existing-network-20220202215327-386638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-20220202215327-386638
E0202 21:53:56.098993  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202214710-386638/client.crt: no such file or directory
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-20220202215327-386638: (2.258432286s)
--- PASS: TestKicExistingNetwork (28.63s)
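The test above reuses a pre-existing Docker network via `--network=existing-network`. A minimal sketch of the existence check that pattern relies on, run against a hypothetical sample of `docker network ls --format '{{.Name}}'` output (the sample list is illustrative, not captured from this run):

```shell
#!/bin/sh
# Hypothetical sample of `docker network ls --format '{{.Name}}'` output;
# in the real test this comes from the docker CLI.
networks='bridge
existing-network
host'

# Reuse the network only if it is already present, matched as a whole line.
if printf '%s\n' "$networks" | grep -qx 'existing-network'; then
  found=yes
else
  found=no
fi
echo "$found"
```

`grep -qx` avoids false positives from names that merely contain the target as a substring.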

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMountStart/serial/StartWithMountFirst (5.9s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-20220202215356-386638 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-20220202215356-386638 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (4.896603347s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.90s)

TestMountStart/serial/VerifyMountFirst (0.33s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-20220202215356-386638 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.33s)

TestMountStart/serial/StartWithMountSecond (5.54s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220202215356-386638 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220202215356-386638 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (4.535440671s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.54s)

TestMountStart/serial/VerifyMountSecond (0.32s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220202215356-386638 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.32s)

TestMountStart/serial/DeleteFirst (1.73s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:133: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-20220202215356-386638 --alsologtostderr -v=5
pause_test.go:133: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-20220202215356-386638 --alsologtostderr -v=5: (1.726379735s)
--- PASS: TestMountStart/serial/DeleteFirst (1.73s)

TestMountStart/serial/VerifyMountPostDelete (0.34s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220202215356-386638 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.34s)

TestMountStart/serial/Stop (1.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:156: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-20220202215356-386638
mount_start_test.go:156: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-20220202215356-386638: (1.265964364s)
--- PASS: TestMountStart/serial/Stop (1.27s)

TestMountStart/serial/RestartStopped (6.82s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:167: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220202215356-386638
E0202 21:54:16.579914  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202214710-386638/client.crt: no such file or directory
mount_start_test.go:167: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220202215356-386638: (5.821624065s)
--- PASS: TestMountStart/serial/RestartStopped (6.82s)

TestMountStart/serial/VerifyMountPostStop (0.32s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220202215356-386638 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.32s)

TestMultiNode/serial/FreshStart2Nodes (75.36s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220202215420-386638 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0202 21:54:27.875573  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202214221-386638/client.crt: no such file or directory
E0202 21:54:55.559906  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202214221-386638/client.crt: no such file or directory
E0202 21:54:57.540103  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202214710-386638/client.crt: no such file or directory
E0202 21:55:29.331888  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202214922-386638/client.crt: no such file or directory
E0202 21:55:29.337134  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202214922-386638/client.crt: no such file or directory
E0202 21:55:29.347360  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202214922-386638/client.crt: no such file or directory
E0202 21:55:29.367630  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202214922-386638/client.crt: no such file or directory
E0202 21:55:29.407884  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202214922-386638/client.crt: no such file or directory
E0202 21:55:29.488249  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202214922-386638/client.crt: no such file or directory
E0202 21:55:29.648673  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202214922-386638/client.crt: no such file or directory
E0202 21:55:29.969379  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202214922-386638/client.crt: no such file or directory
E0202 21:55:30.609857  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202214922-386638/client.crt: no such file or directory
E0202 21:55:31.890373  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202214922-386638/client.crt: no such file or directory
E0202 21:55:34.450855  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202214922-386638/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220202215420-386638 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m14.778836663s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (75.36s)

TestMultiNode/serial/DeployApp2Nodes (3.65s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220202215420-386638 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:491: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220202215420-386638 -- rollout status deployment/busybox
multinode_test.go:491: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20220202215420-386638 -- rollout status deployment/busybox: (1.962592328s)
multinode_test.go:497: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220202215420-386638 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220202215420-386638 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:517: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220202215420-386638 -- exec busybox-7978565885-8qw7h -- nslookup kubernetes.io
multinode_test.go:517: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220202215420-386638 -- exec busybox-7978565885-gwwjj -- nslookup kubernetes.io
multinode_test.go:527: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220202215420-386638 -- exec busybox-7978565885-8qw7h -- nslookup kubernetes.default
multinode_test.go:527: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220202215420-386638 -- exec busybox-7978565885-gwwjj -- nslookup kubernetes.default
multinode_test.go:535: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220202215420-386638 -- exec busybox-7978565885-8qw7h -- nslookup kubernetes.default.svc.cluster.local
E0202 21:55:39.571573  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202214922-386638/client.crt: no such file or directory
multinode_test.go:535: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220202215420-386638 -- exec busybox-7978565885-gwwjj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.65s)

TestMultiNode/serial/PingHostFrom2Pods (0.82s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:545: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220202215420-386638 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:553: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220202215420-386638 -- exec busybox-7978565885-8qw7h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:561: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220202215420-386638 -- exec busybox-7978565885-8qw7h -- sh -c "ping -c 1 192.168.49.1"
multinode_test.go:553: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220202215420-386638 -- exec busybox-7978565885-gwwjj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:561: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220202215420-386638 -- exec busybox-7978565885-gwwjj -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.82s)
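The host-ping check above extracts the host IP with `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`. A sketch of that pipeline against representative busybox `nslookup` output (the sample text is an assumption; real output depends on the in-cluster resolver):

```shell
#!/bin/sh
# Assumed shape of busybox `nslookup host.minikube.internal` output:
lookup='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.49.1 host.minikube.internal'

# Line 5 carries the answer record; the third space-delimited field
# is the IP that the test then pings from inside the pod.
ip=$(printf '%s\n' "$lookup" | awk 'NR==5' | cut -d' ' -f3)
echo "$ip"
```

The hard-coded `NR==5` is brittle by design of the original test: it only works when the resolver emits exactly this five-line layout.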

TestMultiNode/serial/AddNode (27.82s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220202215420-386638 -v 3 --alsologtostderr
E0202 21:55:49.812706  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202214922-386638/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20220202215420-386638 -v 3 --alsologtostderr: (27.04677951s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (27.82s)

TestMultiNode/serial/ProfileList (0.37s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.37s)

TestMultiNode/serial/CopyFile (12.04s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 status --output json --alsologtostderr
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 cp testdata/cp-test.txt multinode-20220202215420-386638:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 ssh -n multinode-20220202215420-386638 "sudo cat /home/docker/cp-test.txt"
E0202 21:56:10.293901  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202214922-386638/client.crt: no such file or directory
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 cp multinode-20220202215420-386638:/home/docker/cp-test.txt /tmp/mk_cp_test664156061/cp-test_multinode-20220202215420-386638.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 ssh -n multinode-20220202215420-386638 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 cp multinode-20220202215420-386638:/home/docker/cp-test.txt multinode-20220202215420-386638-m02:/home/docker/cp-test_multinode-20220202215420-386638_multinode-20220202215420-386638-m02.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 ssh -n multinode-20220202215420-386638 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 ssh -n multinode-20220202215420-386638-m02 "sudo cat /home/docker/cp-test_multinode-20220202215420-386638_multinode-20220202215420-386638-m02.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 cp multinode-20220202215420-386638:/home/docker/cp-test.txt multinode-20220202215420-386638-m03:/home/docker/cp-test_multinode-20220202215420-386638_multinode-20220202215420-386638-m03.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 ssh -n multinode-20220202215420-386638 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 ssh -n multinode-20220202215420-386638-m03 "sudo cat /home/docker/cp-test_multinode-20220202215420-386638_multinode-20220202215420-386638-m03.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 cp testdata/cp-test.txt multinode-20220202215420-386638-m02:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 ssh -n multinode-20220202215420-386638-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 cp multinode-20220202215420-386638-m02:/home/docker/cp-test.txt /tmp/mk_cp_test664156061/cp-test_multinode-20220202215420-386638-m02.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 ssh -n multinode-20220202215420-386638-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 cp multinode-20220202215420-386638-m02:/home/docker/cp-test.txt multinode-20220202215420-386638:/home/docker/cp-test_multinode-20220202215420-386638-m02_multinode-20220202215420-386638.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 ssh -n multinode-20220202215420-386638-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 ssh -n multinode-20220202215420-386638 "sudo cat /home/docker/cp-test_multinode-20220202215420-386638-m02_multinode-20220202215420-386638.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 cp multinode-20220202215420-386638-m02:/home/docker/cp-test.txt multinode-20220202215420-386638-m03:/home/docker/cp-test_multinode-20220202215420-386638-m02_multinode-20220202215420-386638-m03.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 ssh -n multinode-20220202215420-386638-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 ssh -n multinode-20220202215420-386638-m03 "sudo cat /home/docker/cp-test_multinode-20220202215420-386638-m02_multinode-20220202215420-386638-m03.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 cp testdata/cp-test.txt multinode-20220202215420-386638-m03:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 ssh -n multinode-20220202215420-386638-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 cp multinode-20220202215420-386638-m03:/home/docker/cp-test.txt /tmp/mk_cp_test664156061/cp-test_multinode-20220202215420-386638-m03.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 ssh -n multinode-20220202215420-386638-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 cp multinode-20220202215420-386638-m03:/home/docker/cp-test.txt multinode-20220202215420-386638:/home/docker/cp-test_multinode-20220202215420-386638-m03_multinode-20220202215420-386638.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 ssh -n multinode-20220202215420-386638-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 ssh -n multinode-20220202215420-386638 "sudo cat /home/docker/cp-test_multinode-20220202215420-386638-m03_multinode-20220202215420-386638.txt"
E0202 21:56:19.460366  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202214710-386638/client.crt: no such file or directory
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 cp multinode-20220202215420-386638-m03:/home/docker/cp-test.txt multinode-20220202215420-386638-m02:/home/docker/cp-test_multinode-20220202215420-386638-m03_multinode-20220202215420-386638-m02.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 ssh -n multinode-20220202215420-386638-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 ssh -n multinode-20220202215420-386638-m02 "sudo cat /home/docker/cp-test_multinode-20220202215420-386638-m03_multinode-20220202215420-386638-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (12.04s)
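CopyFile repeatedly round-trips `testdata/cp-test.txt` through `minikube cp` and verifies the result with `ssh -n <node> "sudo cat ..."`. A local stand-in for one such round-trip (plain `cp` plays the role of `minikube cp`, local temp files replace node paths; the file content is a placeholder):

```shell
#!/bin/sh
# Local stand-in for the cp/ssh round-trip: `cp` substitutes for
# `minikube cp`, and `cat` for `ssh -n <node> "sudo cat <path>"`.
src=$(mktemp) || exit 1
dst=$(mktemp) || exit 1
printf 'placeholder cp-test content\n' > "$src"

cp "$src" "$dst"   # stands in for: minikube cp testdata/cp-test.txt node:/home/docker/cp-test.txt
if [ "$(cat "$dst")" = "$(cat "$src")" ]; then
  result=match
else
  result=mismatch
fi
echo "$result"
rm -f "$src" "$dst"
```

The real test runs this comparison for every (source node, destination node) pair, which is why twelve seconds of nothing but `cp` and `ssh` invocations appear above.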

TestMultiNode/serial/StopNode (2.51s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:215: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 node stop m03
multinode_test.go:215: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220202215420-386638 node stop m03: (1.274484054s)
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 status
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220202215420-386638 status: exit status 7 (610.327232ms)
-- stdout --
	multinode-20220202215420-386638
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220202215420-386638-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220202215420-386638-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
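In this run, `minikube status` exits with code 7 when the stdout above reports a node as `host: Stopped`. A sketch that detects the same condition from captured status text (the sample is an abbreviated copy of the stdout shown above):

```shell
#!/bin/sh
# Abbreviated copy of the `minikube status` stdout shown above.
status='multinode-20220202215420-386638
type: Control Plane
host: Running
kubelet: Running

multinode-20220202215420-386638-m03
type: Worker
host: Stopped
kubelet: Stopped'

# Count nodes whose host is stopped; this run's CLI returned
# exit code 7 alongside exactly this condition.
stopped=$(printf '%s\n' "$status" | grep -c '^host: Stopped$')
echo "$stopped"
```

Anchoring the pattern with `^...$` keeps `kubelet: Stopped` lines out of the count.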
multinode_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 status --alsologtostderr
multinode_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220202215420-386638 status --alsologtostderr: exit status 7 (622.805811ms)
-- stdout --
	multinode-20220202215420-386638
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220202215420-386638-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220202215420-386638-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0202 21:56:22.901137  473248 out.go:297] Setting OutFile to fd 1 ...
	I0202 21:56:22.901228  473248 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0202 21:56:22.901238  473248 out.go:310] Setting ErrFile to fd 2...
	I0202 21:56:22.901244  473248 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0202 21:56:22.901368  473248 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/bin
	I0202 21:56:22.901512  473248 out.go:304] Setting JSON to false
	I0202 21:56:22.901530  473248 mustload.go:65] Loading cluster: multinode-20220202215420-386638
	I0202 21:56:22.901891  473248 config.go:176] Loaded profile config "multinode-20220202215420-386638": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0202 21:56:22.901907  473248 status.go:253] checking status of multinode-20220202215420-386638 ...
	I0202 21:56:22.902247  473248 cli_runner.go:133] Run: docker container inspect multinode-20220202215420-386638 --format={{.State.Status}}
	I0202 21:56:22.935189  473248 status.go:328] multinode-20220202215420-386638 host status = "Running" (err=<nil>)
	I0202 21:56:22.935211  473248 host.go:66] Checking if "multinode-20220202215420-386638" exists ...
	I0202 21:56:22.935445  473248 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220202215420-386638
	I0202 21:56:22.966969  473248 host.go:66] Checking if "multinode-20220202215420-386638" exists ...
	I0202 21:56:22.967231  473248 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0202 21:56:22.967278  473248 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220202215420-386638
	I0202 21:56:22.999043  473248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49287 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/multinode-20220202215420-386638/id_rsa Username:docker}
	I0202 21:56:23.091016  473248 ssh_runner.go:195] Run: systemctl --version
	I0202 21:56:23.094443  473248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0202 21:56:23.103048  473248 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0202 21:56:23.190747  473248 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:73 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-02-02 21:56:23.13244855 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0202 21:56:23.191974  473248 kubeconfig.go:92] found "multinode-20220202215420-386638" server: "https://192.168.49.2:8443"
	I0202 21:56:23.192004  473248 api_server.go:165] Checking apiserver status ...
	I0202 21:56:23.192042  473248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0202 21:56:23.211377  473248 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1711/cgroup
	I0202 21:56:23.218776  473248 api_server.go:181] apiserver freezer: "6:freezer:/docker/15f00059407fa238cb7d8ad7dfcbc7af37b38d30146a8e8b64583ba5b4256713/kubepods/burstable/pod312944f2f6d95f07fe45a2e29d0c17d3/3d2b66ff832571e754c80979c1d9a102261db41442da526ffd605638de6993c3"
	I0202 21:56:23.218854  473248 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/15f00059407fa238cb7d8ad7dfcbc7af37b38d30146a8e8b64583ba5b4256713/kubepods/burstable/pod312944f2f6d95f07fe45a2e29d0c17d3/3d2b66ff832571e754c80979c1d9a102261db41442da526ffd605638de6993c3/freezer.state
	I0202 21:56:23.225038  473248 api_server.go:203] freezer state: "THAWED"
	I0202 21:56:23.225083  473248 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0202 21:56:23.229549  473248 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0202 21:56:23.229569  473248 status.go:419] multinode-20220202215420-386638 apiserver status = Running (err=<nil>)
	I0202 21:56:23.229586  473248 status.go:255] multinode-20220202215420-386638 status: &{Name:multinode-20220202215420-386638 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0202 21:56:23.229613  473248 status.go:253] checking status of multinode-20220202215420-386638-m02 ...
	I0202 21:56:23.229871  473248 cli_runner.go:133] Run: docker container inspect multinode-20220202215420-386638-m02 --format={{.State.Status}}
	I0202 21:56:23.262336  473248 status.go:328] multinode-20220202215420-386638-m02 host status = "Running" (err=<nil>)
	I0202 21:56:23.262360  473248 host.go:66] Checking if "multinode-20220202215420-386638-m02" exists ...
	I0202 21:56:23.262644  473248 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220202215420-386638-m02
	I0202 21:56:23.296284  473248 host.go:66] Checking if "multinode-20220202215420-386638-m02" exists ...
	I0202 21:56:23.296599  473248 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0202 21:56:23.296648  473248 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220202215420-386638-m02
	I0202 21:56:23.329906  473248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49292 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/multinode-20220202215420-386638-m02/id_rsa Username:docker}
	I0202 21:56:23.423511  473248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0202 21:56:23.432286  473248 status.go:255] multinode-20220202215420-386638-m02 status: &{Name:multinode-20220202215420-386638-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0202 21:56:23.432329  473248 status.go:253] checking status of multinode-20220202215420-386638-m03 ...
	I0202 21:56:23.432574  473248 cli_runner.go:133] Run: docker container inspect multinode-20220202215420-386638-m03 --format={{.State.Status}}
	I0202 21:56:23.464570  473248 status.go:328] multinode-20220202215420-386638-m03 host status = "Stopped" (err=<nil>)
	I0202 21:56:23.464600  473248 status.go:341] host is not running, skipping remaining checks
	I0202 21:56:23.464606  473248 status.go:255] multinode-20220202215420-386638-m03 status: &{Name:multinode-20220202215420-386638-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.51s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (24.64s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:249: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 node start m03 --alsologtostderr
multinode_test.go:259: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220202215420-386638 node start m03 --alsologtostderr: (23.775086943s)
multinode_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 status
multinode_test.go:280: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (24.64s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (131.97s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220202215420-386638
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-20220202215420-386638
E0202 21:56:51.254705  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202214922-386638/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-20220202215420-386638: (22.648369512s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220202215420-386638 --wait=true -v=8 --alsologtostderr
E0202 21:58:13.175807  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202214922-386638/client.crt: no such file or directory
E0202 21:58:35.618222  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202214710-386638/client.crt: no such file or directory
multinode_test.go:300: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220202215420-386638 --wait=true -v=8 --alsologtostderr: (1m49.201253554s)
multinode_test.go:305: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220202215420-386638
--- PASS: TestMultiNode/serial/RestartKeepsNodes (131.97s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.3s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:399: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 node delete m03
E0202 21:59:03.300855  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202214710-386638/client.crt: no such file or directory
multinode_test.go:399: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220202215420-386638 node delete m03: (4.590397655s)
multinode_test.go:405: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 status --alsologtostderr
multinode_test.go:419: (dbg) Run:  docker volume ls
multinode_test.go:429: (dbg) Run:  kubectl get nodes
multinode_test.go:437: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.30s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.71s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:319: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 stop
multinode_test.go:319: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220202215420-386638 stop: (21.455198292s)
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 status
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220202215420-386638 status: exit status 7 (124.432542ms)

                                                
                                                
-- stdout --
	multinode-20220202215420-386638
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220202215420-386638-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 status --alsologtostderr
multinode_test.go:332: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220202215420-386638 status --alsologtostderr: exit status 7 (127.456538ms)

                                                
                                                
-- stdout --
	multinode-20220202215420-386638
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220202215420-386638-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0202 21:59:27.014272  487149 out.go:297] Setting OutFile to fd 1 ...
	I0202 21:59:27.014359  487149 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0202 21:59:27.014363  487149 out.go:310] Setting ErrFile to fd 2...
	I0202 21:59:27.014366  487149 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0202 21:59:27.014476  487149 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/bin
	I0202 21:59:27.014675  487149 out.go:304] Setting JSON to false
	I0202 21:59:27.014694  487149 mustload.go:65] Loading cluster: multinode-20220202215420-386638
	I0202 21:59:27.015023  487149 config.go:176] Loaded profile config "multinode-20220202215420-386638": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0202 21:59:27.015039  487149 status.go:253] checking status of multinode-20220202215420-386638 ...
	I0202 21:59:27.015416  487149 cli_runner.go:133] Run: docker container inspect multinode-20220202215420-386638 --format={{.State.Status}}
	I0202 21:59:27.048349  487149 status.go:328] multinode-20220202215420-386638 host status = "Stopped" (err=<nil>)
	I0202 21:59:27.048382  487149 status.go:341] host is not running, skipping remaining checks
	I0202 21:59:27.048391  487149 status.go:255] multinode-20220202215420-386638 status: &{Name:multinode-20220202215420-386638 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0202 21:59:27.048422  487149 status.go:253] checking status of multinode-20220202215420-386638-m02 ...
	I0202 21:59:27.048677  487149 cli_runner.go:133] Run: docker container inspect multinode-20220202215420-386638-m02 --format={{.State.Status}}
	I0202 21:59:27.080752  487149 status.go:328] multinode-20220202215420-386638-m02 host status = "Stopped" (err=<nil>)
	I0202 21:59:27.080795  487149 status.go:341] host is not running, skipping remaining checks
	I0202 21:59:27.080812  487149 status.go:255] multinode-20220202215420-386638-m02 status: &{Name:multinode-20220202215420-386638-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.71s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (58.14s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:349: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:359: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220202215420-386638 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0202 21:59:27.875778  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202214221-386638/client.crt: no such file or directory
multinode_test.go:359: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220202215420-386638 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (57.420276115s)
multinode_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220202215420-386638 status --alsologtostderr
multinode_test.go:379: (dbg) Run:  kubectl get nodes
multinode_test.go:387: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (58.14s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (29.26s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:448: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220202215420-386638
multinode_test.go:457: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220202215420-386638-m02 --driver=docker  --container-runtime=docker
multinode_test.go:457: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20220202215420-386638-m02 --driver=docker  --container-runtime=docker: exit status 14 (77.486276ms)

                                                
                                                
-- stdout --
	* [multinode-20220202215420-386638-m02] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13251
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220202215420-386638-m02' is duplicated with machine name 'multinode-20220202215420-386638-m02' in profile 'multinode-20220202215420-386638'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220202215420-386638-m03 --driver=docker  --container-runtime=docker
E0202 22:00:29.331778  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202214922-386638/client.crt: no such file or directory
multinode_test.go:465: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220202215420-386638-m03 --driver=docker  --container-runtime=docker: (26.435524776s)
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220202215420-386638
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20220202215420-386638: exit status 80 (347.422567ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-20220202215420-386638
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220202215420-386638-m03 already exists in multinode-20220202215420-386638-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:477: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20220202215420-386638-m03
multinode_test.go:477: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20220202215420-386638-m03: (2.343369332s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (29.26s)

                                                
                                    
TestPreload (119.56s)

=== RUN   TestPreload
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220202220059-386638 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.0
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220202220059-386638 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.0: (1m22.605535069s)
preload_test.go:62: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220202220059-386638 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:62: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20220202220059-386638 -- docker pull gcr.io/k8s-minikube/busybox: (1.034141632s)
preload_test.go:72: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220202220059-386638 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.3
preload_test.go:72: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220202220059-386638 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.3: (33.034043387s)
preload_test.go:81: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220202220059-386638 -- docker images
helpers_test.go:176: Cleaning up "test-preload-20220202220059-386638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20220202220059-386638
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20220202220059-386638: (2.511706362s)
--- PASS: TestPreload (119.56s)

                                                
                                    
TestScheduledStopUnix (100.19s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:129: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20220202220258-386638 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:129: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20220202220258-386638 --memory=2048 --driver=docker  --container-runtime=docker: (26.615552863s)
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220202220258-386638 --schedule 5m
scheduled_stop_test.go:192: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220202220258-386638 -n scheduled-stop-20220202220258-386638
scheduled_stop_test.go:170: signal error was:  <nil>
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220202220258-386638 --schedule 15s
scheduled_stop_test.go:170: signal error was:  os: process already finished
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220202220258-386638 --cancel-scheduled
E0202 22:03:35.617640  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202214710-386638/client.crt: no such file or directory
scheduled_stop_test.go:177: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220202220258-386638 -n scheduled-stop-20220202220258-386638
scheduled_stop_test.go:206: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220202220258-386638
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220202220258-386638 --schedule 15s
scheduled_stop_test.go:170: signal error was:  os: process already finished
E0202 22:04:27.875149  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202214221-386638/client.crt: no such file or directory
scheduled_stop_test.go:206: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220202220258-386638
scheduled_stop_test.go:206: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-20220202220258-386638: exit status 7 (92.11893ms)

                                                
                                                
-- stdout --
	scheduled-stop-20220202220258-386638
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:177: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220202220258-386638 -n scheduled-stop-20220202220258-386638
scheduled_stop_test.go:177: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220202220258-386638 -n scheduled-stop-20220202220258-386638: exit status 7 (91.953623ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:177: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-20220202220258-386638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20220202220258-386638
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-20220202220258-386638: (1.869144311s)
--- PASS: TestScheduledStopUnix (100.19s)

                                                
                                    
TestSkaffold (67.58s)

=== RUN   TestSkaffold
skaffold_test.go:57: (dbg) Run:  /tmp/skaffold.exe291626689 version
skaffold_test.go:61: skaffold version: v1.35.2
skaffold_test.go:64: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-20220202220438-386638 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:64: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-20220202220438-386638 --memory=2600 --driver=docker  --container-runtime=docker: (27.194176754s)
skaffold_test.go:84: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:108: (dbg) Run:  /tmp/skaffold.exe291626689 run --minikube-profile skaffold-20220202220438-386638 --kube-context skaffold-20220202220438-386638 --status-check=true --port-forward=false --interactive=false
E0202 22:05:29.331549  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202214922-386638/client.crt: no such file or directory
skaffold_test.go:108: (dbg) Done: /tmp/skaffold.exe291626689 run --minikube-profile skaffold-20220202220438-386638 --kube-context skaffold-20220202220438-386638 --status-check=true --port-forward=false --interactive=false: (27.057124691s)
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:343: "leeroy-app-c4d7b79c9-886mz" [366e3338-70e7-4628-8c27-0f8e8ceb6d13] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-app healthy within 5.009877369s
skaffold_test.go:117: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:343: "leeroy-web-6844fc76d8-t284s" [924bc70a-ab3e-4a59-a7d0-ea75edd2c539] Running
skaffold_test.go:117: (dbg) TestSkaffold: app=leeroy-web healthy within 5.006362875s
helpers_test.go:176: Cleaning up "skaffold-20220202220438-386638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-20220202220438-386638
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-20220202220438-386638: (2.56110078s)
--- PASS: TestSkaffold (67.58s)

                                                
                                    
TestInsufficientStorage (14.79s)

=== RUN   TestInsufficientStorage
status_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-20220202220546-386638 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
E0202 22:05:50.920182  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202214221-386638/client.crt: no such file or directory
status_test.go:51: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-20220202220546-386638 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (12.193875644s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"69975ad4-b95d-4390-8bc3-3f00a20fef45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220202220546-386638] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f696108c-b05a-49f1-9fba-e4bb4dd456f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13251"}}
	{"specversion":"1.0","id":"b2c61fde-0f95-4e35-8f42-4e0c1144cd4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e07e2f37-4576-466b-bc7e-a6a933da3c1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig"}}
	{"specversion":"1.0","id":"9ec292a1-95f2-4249-ae30-33ed3397d8ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube"}}
	{"specversion":"1.0","id":"3494b992-8594-4150-af19-e192855f23ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"fc6b6cb3-de30-4907-8ada-f0f0bafb22a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"1576c199-1347-4437-909e-e908389daf39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c9f7af05-10cb-40ad-b6ef-4d86c5107872","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"Your cgroup does not allow setting memory."}}
	{"specversion":"1.0","id":"bde1204a-1124-493c-9e28-4cf4c655fcdd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities"}}
	{"specversion":"1.0","id":"8431cd69-3f22-4ec7-b99f-3740185b4370","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220202220546-386638 in cluster insufficient-storage-20220202220546-386638","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"62c52fdd-ef25-4c10-8307-b986fe307ae7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"01504555-ba02-4408-b5d1-9d69d45876f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"4b36e0ca-520e-46c2-ba92-fdf4ae40df97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:77: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220202220546-386638 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220202220546-386638 --output=json --layout=cluster: exit status 7 (347.599747ms)

-- stdout --
	{"Name":"insufficient-storage-20220202220546-386638","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.25.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220202220546-386638","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0202 22:05:58.876489  519320 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220202220546-386638" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig

** /stderr **
status_test.go:77: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220202220546-386638 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220202220546-386638 --output=json --layout=cluster: exit status 7 (343.315324ms)

-- stdout --
	{"Name":"insufficient-storage-20220202220546-386638","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.25.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220202220546-386638","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0202 22:05:59.220635  519417 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220202220546-386638" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
	E0202 22:05:59.232260  519417 status.go:557] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/insufficient-storage-20220202220546-386638/events.json: no such file or directory

** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-20220202220546-386638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-20220202220546-386638
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-20220202220546-386638: (1.902820348s)
--- PASS: TestInsufficientStorage (14.79s)
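Aside: the `start --output=json` lines in the run above are CloudEvents-style JSON, one event per line, and the failure the test expects surfaces as an `io.k8s.sigs.minikube.error` event. A minimal sketch of pulling such errors out of the stream (the helper name and the trimmed sample lines are ours, not minikube's):

```python
import json

def find_error_events(lines):
    """Yield the `data` payload of each CloudEvents-style error event
    emitted by `minikube start --output=json`."""
    for line in lines:
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        if event.get("type") == "io.k8s.sigs.minikube.error":
            yield event["data"]

# Two lines trimmed from the output above (ids and extra fields dropped).
sample = [
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.info","data":{"message":"MINIKUBE_LOCATION=13251"}}',
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE","message":"Docker is out of disk space! (/var is at 100% of capacity)"}}',
]
errors = list(find_error_events(sample))
print(errors[0]["name"], errors[0]["exitcode"])  # RSRC_DOCKER_STORAGE 26
```

The `exitcode` field in the error event matches the process exit status 26 reported by the test harness.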

TestRunningBinaryUpgrade (104.28s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.9.0.2561475895.exe start -p running-upgrade-20220202220601-386638 --memory=2200 --vm-driver=docker  --container-runtime=docker

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /tmp/minikube-v1.9.0.2561475895.exe start -p running-upgrade-20220202220601-386638 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m7.519926739s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20220202220601-386638 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-20220202220601-386638 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (29.566859593s)
helpers_test.go:176: Cleaning up "running-upgrade-20220202220601-386638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20220202220601-386638
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20220202220601-386638: (6.803332024s)
--- PASS: TestRunningBinaryUpgrade (104.28s)

TestKubernetesUpgrade (160.14s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220202220745-386638 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220202220745-386638 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (46.600333534s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220202220745-386638
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220202220745-386638: (1.32718664s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20220202220745-386638 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20220202220745-386638 status --format={{.Host}}: exit status 7 (106.831316ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220202220745-386638 --memory=2200 --kubernetes-version=v1.23.3-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0202 22:08:35.618432  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202214710-386638/client.crt: no such file or directory

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220202220745-386638 --memory=2200 --kubernetes-version=v1.23.3-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m31.810029286s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220202220745-386638 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220202220745-386638 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220202220745-386638 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker: exit status 106 (80.34372ms)

-- stdout --
	* [kubernetes-upgrade-20220202220745-386638] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13251
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.23.3-rc.0 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20220202220745-386638
	    minikube start -p kubernetes-upgrade-20220202220745-386638 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220202220745-3866382 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.23.3-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220202220745-386638 --kubernetes-version=v1.23.3-rc.0
	    

** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220202220745-386638 --memory=2200 --kubernetes-version=v1.23.3-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220202220745-386638 --memory=2200 --kubernetes-version=v1.23.3-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (13.832647859s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-20220202220745-386638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220202220745-386638
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220202220745-386638: (6.316041731s)
--- PASS: TestKubernetesUpgrade (160.14s)
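Aside: the K8S_DOWNGRADE_UNSUPPORTED refusal above fires because the requested version (v1.16.0) is lower than the cluster's existing version (v1.23.3-rc.0). An illustrative sketch of that comparison, ignoring pre-release tags (this is our simplification, not minikube's actual implementation):

```python
def parse_k8s_version(v):
    """Split 'v1.23.3-rc.0' into ((1, 23, 3), 'rc.0'); the pre-release tag may be empty."""
    core, _, pre = v.lstrip("v").partition("-")
    return tuple(int(p) for p in core.split(".")), pre

def is_downgrade(current, requested):
    # Compare only the numeric core; a lower requested version is refused.
    return parse_k8s_version(requested)[0] < parse_k8s_version(current)[0]

print(is_downgrade("v1.23.3-rc.0", "v1.16.0"))   # True  -> exit 106, as above
print(is_downgrade("v1.16.0", "v1.23.3-rc.0"))   # False -> upgrade allowed
```

This matches the observed behavior: the v1.16.0 -> v1.23.3-rc.0 upgrade succeeded, and the reverse request exited with status 106.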

TestMissingContainerUpgrade (101.52s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /tmp/minikube-v1.9.1.2820379421.exe start -p missing-upgrade-20220202220731-386638 --memory=2200 --driver=docker  --container-runtime=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: /tmp/minikube-v1.9.1.2820379421.exe start -p missing-upgrade-20220202220731-386638 --memory=2200 --driver=docker  --container-runtime=docker: (39.717238559s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-20220202220731-386638

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20220202220731-386638: (10.370653092s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-20220202220731-386638
version_upgrade_test.go:336: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-20220202220731-386638 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-20220202220731-386638 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (48.819715499s)
helpers_test.go:176: Cleaning up "missing-upgrade-20220202220731-386638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-20220202220731-386638
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-20220202220731-386638: (2.197180406s)
--- PASS: TestMissingContainerUpgrade (101.52s)

TestStoppedBinaryUpgrade/Setup (0.41s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.41s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:84: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220202220601-386638 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:84: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-20220202220601-386638 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (81.852778ms)

-- stdout --
	* [NoKubernetes-20220202220601-386638] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13251
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (67.19s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220202220601-386638 --driver=docker  --container-runtime=docker

=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220202220601-386638 --driver=docker  --container-runtime=docker: (1m6.651745891s)
no_kubernetes_test.go:201: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220202220601-386638 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (67.19s)

TestStoppedBinaryUpgrade/Upgrade (84.69s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /tmp/minikube-v1.9.0.2999762736.exe start -p stopped-upgrade-20220202220601-386638 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:190: (dbg) Done: /tmp/minikube-v1.9.0.2999762736.exe start -p stopped-upgrade-20220202220601-386638 --memory=2200 --vm-driver=docker  --container-runtime=docker: (55.968206144s)
version_upgrade_test.go:199: (dbg) Run:  /tmp/minikube-v1.9.0.2999762736.exe -p stopped-upgrade-20220202220601-386638 stop
version_upgrade_test.go:199: (dbg) Done: /tmp/minikube-v1.9.0.2999762736.exe -p stopped-upgrade-20220202220601-386638 stop: (2.458828505s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-20220202220601-386638 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-20220202220601-386638 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (26.265341129s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (84.69s)

TestNoKubernetes/serial/StartWithStopK8s (21.07s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:113: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220202220601-386638 --no-kubernetes --driver=docker  --container-runtime=docker

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:113: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220202220601-386638 --no-kubernetes --driver=docker  --container-runtime=docker: (14.263123466s)
no_kubernetes_test.go:201: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220202220601-386638 status -o json
no_kubernetes_test.go:201: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-20220202220601-386638 status -o json: exit status 2 (471.171586ms)

-- stdout --
	{"Name":"NoKubernetes-20220202220601-386638","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:125: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-20220202220601-386638

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:125: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-20220202220601-386638: (6.332749984s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (21.07s)
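Aside: the exit-status-2 `status -o json` payload above is exactly what a `--no-kubernetes` node should look like: the host container is up while kubelet and apiserver are stopped. A minimal sketch of the check (the helper name is ours; the sample string is the payload captured above):

```python
import json

def k8s_disabled(status_json):
    """True when the node is running but the Kubernetes components are
    stopped, which is what `--no-kubernetes` should produce."""
    s = json.loads(status_json)
    return (s["Host"] == "Running"
            and s["Kubelet"] == "Stopped"
            and s["APIServer"] == "Stopped")

# Output captured from `minikube status -o json` in the run above.
out = '{"Name":"NoKubernetes-20220202220601-386638","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}'
print(k8s_disabled(out))  # True
```

Note that `minikube status` deliberately exits non-zero (status 2) when any component is stopped, so the test treats the non-zero exit as expected and inspects the JSON instead.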

TestPause/serial/Start (61.85s)

=== RUN   TestPause/serial/Start
pause_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220202220718-386638 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker

=== CONT  TestPause/serial/Start
pause_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220202220718-386638 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m1.845588599s)
--- PASS: TestPause/serial/Start (61.85s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.79s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-20220202220601-386638
version_upgrade_test.go:213: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-20220202220601-386638: (2.793707384s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.79s)

TestNoKubernetes/serial/Start (203.65s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220202220601-386638 --no-kubernetes --driver=docker  --container-runtime=docker

=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220202220601-386638 --no-kubernetes --driver=docker  --container-runtime=docker: (3m23.645163369s)
--- PASS: TestNoKubernetes/serial/Start (203.65s)

TestPause/serial/SecondStartNoReconfiguration (5.13s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220202220718-386638 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220202220718-386638 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (5.119800295s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (5.13s)

TestPause/serial/Pause (0.65s)

=== RUN   TestPause/serial/Pause
pause_test.go:111: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20220202220718-386638 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.65s)

TestPause/serial/VerifyStatus (0.39s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:77: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20220202220718-386638 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20220202220718-386638 --output=json --layout=cluster: exit status 2 (385.602033ms)

-- stdout --
	{"Name":"pause-20220202220718-386638","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.25.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220202220718-386638","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.39s)
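Aside: the `--layout=cluster` payloads in this report use HTTP-style status codes (200 OK, 405 Stopped, 418 Paused, 500 Error, 507 InsufficientStorage, all visible above). A sketch that flattens such a payload into per-component states for assertions (the helper name is ours; the sample is trimmed from the output above):

```python
import json

def component_states(cluster_json):
    """Flatten a `minikube status --layout=cluster` payload into
    {component: status-name} for quick assertions."""
    c = json.loads(cluster_json)
    states = {"cluster": c["StatusName"]}
    for node in c.get("Nodes", []):
        for name, comp in node.get("Components", {}).items():
            states[name] = comp["StatusName"]
    return states

# Trimmed from the VerifyStatus output above.
out = ('{"Name":"pause-20220202220718-386638","StatusCode":418,"StatusName":"Paused",'
       '"Nodes":[{"Name":"pause-20220202220718-386638","StatusCode":200,"StatusName":"OK",'
       '"Components":{"apiserver":{"StatusCode":418,"StatusName":"Paused"},'
       '"kubelet":{"StatusCode":405,"StatusName":"Stopped"}}}]}')
print(component_states(out))
```

As with `status -o json`, a paused cluster makes the command exit non-zero (status 2 here), so the JSON body, not the exit code, carries the verdict.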

TestPause/serial/Unpause (0.63s)

=== RUN   TestPause/serial/Unpause
pause_test.go:122: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20220202220718-386638 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.63s)

TestPause/serial/PauseAgain (0.87s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:111: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20220202220718-386638 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.87s)

TestPause/serial/DeletePaused (2.48s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:133: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20220202220718-386638 --alsologtostderr -v=5
pause_test.go:133: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20220202220718-386638 --alsologtostderr -v=5: (2.484397746s)
--- PASS: TestPause/serial/DeletePaused (2.48s)

TestPause/serial/VerifyDeletedResources (0.61s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:169: (dbg) Run:  docker ps -a
pause_test.go:174: (dbg) Run:  docker volume inspect pause-20220202220718-386638
pause_test.go:174: (dbg) Non-zero exit: docker volume inspect pause-20220202220718-386638: exit status 1 (32.846519ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such volume: pause-20220202220718-386638

** /stderr **
pause_test.go:179: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.61s)

TestStartStop/group/old-k8s-version/serial/FirstStart (122.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220202221025-386638 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E0202 22:10:29.331409  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202214922-386638/client.crt: no such file or directory
E0202 22:10:33.762756  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202220438-386638/client.crt: no such file or directory
E0202 22:10:33.768036  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202220438-386638/client.crt: no such file or directory
E0202 22:10:33.778301  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202220438-386638/client.crt: no such file or directory
E0202 22:10:33.798612  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202220438-386638/client.crt: no such file or directory
E0202 22:10:33.838905  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202220438-386638/client.crt: no such file or directory
E0202 22:10:33.919249  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202220438-386638/client.crt: no such file or directory
E0202 22:10:34.079684  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202220438-386638/client.crt: no such file or directory
E0202 22:10:34.400250  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202220438-386638/client.crt: no such file or directory
E0202 22:10:35.040409  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202220438-386638/client.crt: no such file or directory
E0202 22:10:36.320930  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202220438-386638/client.crt: no such file or directory
E0202 22:10:38.881403  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202220438-386638/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20220202221025-386638 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (2m2.342789624s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (122.34s)

TestStartStop/group/no-preload/serial/FirstStart (62.56s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220202221047-386638 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.3-rc.0

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20220202221047-386638 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.3-rc.0: (1m2.557610743s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (62.56s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.44s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:148: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220202220601-386638 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:148: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220202220601-386638 "sudo systemctl is-active --quiet service kubelet": exit status 1 (438.597486ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.44s)

TestNoKubernetes/serial/ProfileList (2.09s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:170: (dbg) Run:  out/minikube-linux-amd64 profile list
E0202 22:10:54.243573  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202220438-386638/client.crt: no such file or directory
no_kubernetes_test.go:180: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:180: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.123551112s)
--- PASS: TestNoKubernetes/serial/ProfileList (2.09s)

TestNoKubernetes/serial/Stop (1.34s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:159: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-20220202220601-386638
no_kubernetes_test.go:159: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-20220202220601-386638: (1.337867399s)
--- PASS: TestNoKubernetes/serial/Stop (1.34s)

TestStartStop/group/embed-certs/serial/FirstStart (48.35s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220202221105-386638 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.2
E0202 22:11:14.724264  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202220438-386638/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20220202221105-386638 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.2: (48.345245141s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (48.35s)

TestStartStop/group/no-preload/serial/DeployApp (8.33s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20220202221047-386638 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [529e3bf2-e981-4a21-a0a9-64e4b1eb41a7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0202 22:11:52.376888  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202214922-386638/client.crt: no such file or directory
helpers_test.go:343: "busybox" [529e3bf2-e981-4a21-a0a9-64e4b1eb41a7] Running

=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:181: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.011493611s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20220202221047-386638 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.33s)

TestStartStop/group/embed-certs/serial/DeployApp (9.41s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20220202221105-386638 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [44ce323e-a0d7-48f0-a12d-0d3c789063e9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0202 22:11:55.741303  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202220438-386638/client.crt: no such file or directory
helpers_test.go:343: "busybox" [44ce323e-a0d7-48f0-a12d-0d3c789063e9] Running

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:181: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.012413175s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20220202221105-386638 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.41s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.59s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20220202221047-386638 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context no-preload-20220202221047-386638 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.59s)

TestStartStop/group/no-preload/serial/Stop (10.86s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20220202221047-386638 --alsologtostderr -v=3

=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20220202221047-386638 --alsologtostderr -v=3: (10.858263879s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.86s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.65s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20220202221105-386638 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context embed-certs-20220202221105-386638 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.65s)

TestStartStop/group/embed-certs/serial/Stop (10.9s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20220202221105-386638 --alsologtostderr -v=3

=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20220202221105-386638 --alsologtostderr -v=3: (10.904511922s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.90s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220202221047-386638 -n no-preload-20220202221047-386638
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220202221047-386638 -n no-preload-20220202221047-386638: exit status 7 (99.339889ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20220202221047-386638 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (336.76s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220202221047-386638 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.3-rc.0

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20220202221047-386638 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.3-rc.0: (5m36.304465769s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220202221047-386638 -n no-preload-20220202221047-386638
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (336.76s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220202221105-386638 -n embed-certs-20220202221105-386638
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220202221105-386638 -n embed-certs-20220202221105-386638: exit status 7 (99.471305ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20220202221105-386638 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (339.7s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220202221105-386638 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.2

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20220202221105-386638 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.2: (5m39.135767184s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220202221105-386638 -n embed-certs-20220202221105-386638
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (339.70s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20220202221025-386638 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [95f266b7-7b06-4601-bc5a-12668bc471bf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [95f266b7-7b06-4601-bc5a-12668bc471bf] Running
start_stop_delete_test.go:181: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.01158066s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20220202221025-386638 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.48s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20220202221025-386638 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context old-k8s-version-20220202221025-386638 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.55s)

TestStartStop/group/old-k8s-version/serial/Stop (10.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20220202221025-386638 --alsologtostderr -v=3

=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20220202221025-386638 --alsologtostderr -v=3: (10.923857808s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.92s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220202221025-386638 -n old-k8s-version-20220202221025-386638
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220202221025-386638 -n old-k8s-version-20220202221025-386638: exit status 7 (102.805873ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20220202221025-386638 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/old-k8s-version/serial/SecondStart (412.59s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220202221025-386638 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20220202221025-386638 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (6m52.111874676s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220202221025-386638 -n old-k8s-version-20220202221025-386638
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (412.59s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (44.68s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220202221253-386638 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.2
E0202 22:13:17.662138  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202220438-386638/client.crt: no such file or directory
E0202 22:13:35.618475  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202214710-386638/client.crt: no such file or directory
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20220202221253-386638 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.2: (44.68143416s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (44.68s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (8.44s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20220202221253-386638 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [bd38b672-560f-4c43-81c4-dc732e0d3086] Pending
helpers_test.go:343: "busybox" [bd38b672-560f-4c43-81c4-dc732e0d3086] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [bd38b672-560f-4c43-81c4-dc732e0d3086] Running
start_stop_delete_test.go:181: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 8.011633924s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20220202221253-386638 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (8.44s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.58s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20220202221253-386638 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context default-k8s-different-port-20220202221253-386638 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.58s)

TestStartStop/group/default-k8s-different-port/serial/Stop (10.87s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20220202221253-386638 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20220202221253-386638 --alsologtostderr -v=3: (10.867825572s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (10.87s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220202221253-386638 -n default-k8s-different-port-20220202221253-386638
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220202221253-386638 -n default-k8s-different-port-20220202221253-386638: exit status 7 (101.215414ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20220202221253-386638 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (340.63s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220202221253-386638 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.2
E0202 22:14:27.875448  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/addons-20220202214221-386638/client.crt: no such file or directory
E0202 22:15:29.331138  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202214922-386638/client.crt: no such file or directory
E0202 22:15:33.761561  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202220438-386638/client.crt: no such file or directory
E0202 22:16:01.502557  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/skaffold-20220202220438-386638/client.crt: no such file or directory
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20220202221253-386638 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.2: (5m40.218877675s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220202221253-386638 -n default-k8s-different-port-20220202221253-386638
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (340.63s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (8.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-rwwsv" [2a29d1e0-51b6-4670-9043-4dcccda30188] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-rwwsv" [2a29d1e0-51b6-4670-9043-4dcccda30188] Running
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.012171555s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (8.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.19s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-rwwsv" [2a29d1e0-51b6-4670-9043-4dcccda30188] Running
=== CONT  TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006390568s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context no-preload-20220202221047-386638 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.19s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-wkq74" [c0720e13-283b-491b-b181-d472ba27de45] Running
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013221012s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-20220202221047-386638 "sudo crictl images -o json"
=== CONT  TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.39s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-wkq74" [c0720e13-283b-491b-b181-d472ba27de45] Running
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006658325s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context embed-certs-20220202221105-386638 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/Pause (3.26s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-20220202221047-386638 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220202221047-386638 -n no-preload-20220202221047-386638
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220202221047-386638 -n no-preload-20220202221047-386638: exit status 2 (444.43172ms)
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220202221047-386638 -n no-preload-20220202221047-386638
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220202221047-386638 -n no-preload-20220202221047-386638: exit status 2 (420.711768ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-20220202221047-386638 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220202221047-386638 -n no-preload-20220202221047-386638
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220202221047-386638 -n no-preload-20220202221047-386638
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.26s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.43s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20220202221105-386638 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.43s)

TestStartStop/group/embed-certs/serial/Pause (3.5s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20220202221105-386638 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220202221105-386638 -n embed-certs-20220202221105-386638
=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220202221105-386638 -n embed-certs-20220202221105-386638: exit status 2 (453.639333ms)
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220202221105-386638 -n embed-certs-20220202221105-386638
=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220202221105-386638 -n embed-certs-20220202221105-386638: exit status 2 (441.735897ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-20220202221105-386638 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220202221105-386638 -n embed-certs-20220202221105-386638
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220202221105-386638 -n embed-certs-20220202221105-386638
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.50s)

TestStartStop/group/newest-cni/serial/FirstStart (41.05s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220202221806-386638 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.3-rc.0
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220202221806-386638 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.3-rc.0: (41.047237585s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (41.05s)

TestNetworkPlugins/group/auto/Start (42.85s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20220202220909-386638 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=docker
E0202 22:18:35.618524  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/functional-20220202214710-386638/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p auto-20220202220909-386638 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=docker: (42.845679953s)
--- PASS: TestNetworkPlugins/group/auto/Start (42.85s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.76s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20220202221806-386638 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:196: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.76s)

TestStartStop/group/newest-cni/serial/Stop (10.77s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20220202221806-386638 --alsologtostderr -v=3
=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20220202221806-386638 --alsologtostderr -v=3: (10.768181711s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.77s)

TestNetworkPlugins/group/auto/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20220202220909-386638 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.43s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220202221806-386638 -n newest-cni-20220202221806-386638
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220202221806-386638 -n newest-cni-20220202221806-386638: exit status 7 (101.725646ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20220202221806-386638 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestNetworkPlugins/group/auto/NetCatPod (11.18s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context auto-20220202220909-386638 replace --force -f testdata/netcat-deployment.yaml
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-2gw7m" [ef1e25e6-2968-439a-9847-d64fafcc33da] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-2gw7m" [ef1e25e6-2968-439a-9847-d64fafcc33da] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.007277213s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.18s)

TestStartStop/group/newest-cni/serial/SecondStart (20.05s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220202221806-386638 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.3-rc.0
=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220202221806-386638 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.3-rc.0: (19.548732552s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220202221806-386638 -n newest-cni-20220202221806-386638
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (20.05s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:163: (dbg) Run:  kubectl --context auto-20220202220909-386638 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:182: (dbg) Run:  kubectl --context auto-20220202220909-386638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

TestNetworkPlugins/group/auto/HairPin (5.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:232: (dbg) Run:  kubectl --context auto-20220202220909-386638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:232: (dbg) Non-zero exit: kubectl --context auto-20220202220909-386638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.151012947s)
** stderr ** 
	command terminated with exit code 1

** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.15s)

TestNetworkPlugins/group/false/Start (55.3s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p false-20220202220909-386638 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker  --container-runtime=docker
=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p false-20220202220909-386638 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker  --container-runtime=docker: (55.2960922s)
--- PASS: TestNetworkPlugins/group/false/Start (55.30s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:258: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:269: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.44s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20220202221806-386638 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.44s)

TestStartStop/group/newest-cni/serial/Pause (3.29s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20220202221806-386638 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220202221806-386638 -n newest-cni-20220202221806-386638
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220202221806-386638 -n newest-cni-20220202221806-386638: exit status 2 (410.451542ms)
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220202221806-386638 -n newest-cni-20220202221806-386638
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220202221806-386638 -n newest-cni-20220202221806-386638: exit status 2 (418.090687ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-20220202221806-386638 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220202221806-386638 -n newest-cni-20220202221806-386638
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220202221806-386638 -n newest-cni-20220202221806-386638
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.29s)

TestNetworkPlugins/group/kindnet/Start (59.67s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20220202220909-386638 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=docker
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20220202220909-386638 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=docker: (59.671424279s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (59.67s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (8.02s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-pqgmg" [5cfb420c-d627-437c-9342-c15d3bd5c06e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-pqgmg" [5cfb420c-d627-437c-9342-c15d3bd5c06e] Running
=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.013496053s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (8.02s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-766959b846-9dqmq" [231f7cad-5961-4660-b4b0-d241e8859fe2] Running
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013553623s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-766959b846-9dqmq" [231f7cad-5961-4660-b4b0-d241e8859fe2] Running
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005454738s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20220202221025-386638 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)
TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-pqgmg" [5cfb420c-d627-437c-9342-c15d3bd5c06e] Running
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008081414s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20220202221253-386638 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.07s)
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.42s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20220202221025-386638 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.42s)
TestStartStop/group/old-k8s-version/serial/Pause (3.59s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20220202221025-386638 --alsologtostderr -v=1
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220202221025-386638 -n old-k8s-version-20220202221025-386638
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220202221025-386638 -n old-k8s-version-20220202221025-386638: exit status 2 (443.156319ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220202221025-386638 -n old-k8s-version-20220202221025-386638
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220202221025-386638 -n old-k8s-version-20220202221025-386638: exit status 2 (507.851964ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-20220202221025-386638 --alsologtostderr -v=1
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220202221025-386638 -n old-k8s-version-20220202221025-386638
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220202221025-386638 -n old-k8s-version-20220202221025-386638
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.59s)
TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.46s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-different-port-20220202221253-386638 "sudo crictl images -o json"
=== CONT  TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.46s)
TestStartStop/group/default-k8s-different-port/serial/Pause (3.86s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-different-port-20220202221253-386638 --alsologtostderr -v=1
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220202221253-386638 -n default-k8s-different-port-20220202221253-386638
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220202221253-386638 -n default-k8s-different-port-20220202221253-386638: exit status 2 (460.204098ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220202221253-386638 -n default-k8s-different-port-20220202221253-386638
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220202221253-386638 -n default-k8s-different-port-20220202221253-386638: exit status 2 (477.417335ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-different-port-20220202221253-386638 --alsologtostderr -v=1
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220202221253-386638 -n default-k8s-different-port-20220202221253-386638
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220202221253-386638 -n default-k8s-different-port-20220202221253-386638
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (3.86s)
E0202 22:21:49.988638  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/no-preload-20220202221047-386638/client.crt: no such file or directory
E0202 22:21:49.993924  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/no-preload-20220202221047-386638/client.crt: no such file or directory
E0202 22:21:50.004192  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/no-preload-20220202221047-386638/client.crt: no such file or directory
E0202 22:21:50.024467  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/no-preload-20220202221047-386638/client.crt: no such file or directory
E0202 22:21:50.064718  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/no-preload-20220202221047-386638/client.crt: no such file or directory
E0202 22:21:50.145108  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/no-preload-20220202221047-386638/client.crt: no such file or directory
E0202 22:21:50.305511  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/no-preload-20220202221047-386638/client.crt: no such file or directory
E0202 22:21:50.625715  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/no-preload-20220202221047-386638/client.crt: no such file or directory
E0202 22:21:51.265947  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/no-preload-20220202221047-386638/client.crt: no such file or directory
E0202 22:21:52.546760  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/no-preload-20220202221047-386638/client.crt: no such file or directory
E0202 22:21:55.107600  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/no-preload-20220202221047-386638/client.crt: no such file or directory
E0202 22:22:00.228440  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/no-preload-20220202221047-386638/client.crt: no such file or directory
TestNetworkPlugins/group/enable-default-cni/Start (47.7s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20220202220909-386638 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=docker
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20220202220909-386638 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=docker: (47.700070217s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (47.70s)
TestNetworkPlugins/group/bridge/Start (48.2s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20220202220909-386638 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=docker
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20220202220909-386638 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=docker: (48.203261019s)
--- PASS: TestNetworkPlugins/group/bridge/Start (48.20s)
TestNetworkPlugins/group/false/KubeletFlags (0.43s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-20220202220909-386638 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.43s)
TestNetworkPlugins/group/false/NetCatPod (11.23s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context false-20220202220909-386638 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-5k6fz" [b16e3ae2-d3f0-4d3c-9345-e052a331988a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-5k6fz" [b16e3ae2-d3f0-4d3c-9345-e052a331988a] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.006874165s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.23s)
TestNetworkPlugins/group/false/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:163: (dbg) Run:  kubectl --context false-20220202220909-386638 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.16s)
TestNetworkPlugins/group/false/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:182: (dbg) Run:  kubectl --context false-20220202220909-386638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.14s)
TestNetworkPlugins/group/false/HairPin (5.15s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:232: (dbg) Run:  kubectl --context false-20220202220909-386638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0202 22:20:29.331646  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/ingress-addon-legacy-20220202214922-386638/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/false/HairPin
net_test.go:232: (dbg) Non-zero exit: kubectl --context false-20220202220909-386638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.148234417s)
** stderr ** 
	command terminated with exit code 1
** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.15s)
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:343: "kindnet-c4qtx" [4dcfb977-37c9-4613-820e-c8bee0219167] Running
=== CONT  TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.013273434s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
TestNetworkPlugins/group/kubenet/Start (49.39s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-20220202220909-386638 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker  --container-runtime=docker
=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-20220202220909-386638 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (49.387857696s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (49.39s)
TestNetworkPlugins/group/kindnet/KubeletFlags (0.4s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20220202220909-386638 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.40s)
TestNetworkPlugins/group/kindnet/NetCatPod (15.25s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context kindnet-20220202220909-386638 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-fnp7c" [a773c5a2-37ca-4a5c-8bc1-f41b6d4dab2d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-fnp7c" [a773c5a2-37ca-4a5c-8bc1-f41b6d4dab2d] Running
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 15.006535734s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (15.25s)
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.4s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20220202220909-386638 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.40s)
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.23s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context enable-default-cni-20220202220909-386638 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-gfjv8" [cfe1c635-685b-4183-b283-ee3d9ca78a35] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:343: "netcat-668db85669-gfjv8" [cfe1c635-685b-4183-b283-ee3d9ca78a35] Running
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.007614291s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.23s)
TestNetworkPlugins/group/bridge/KubeletFlags (0.44s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20220202220909-386638 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.44s)
TestNetworkPlugins/group/bridge/NetCatPod (11.32s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context bridge-20220202220909-386638 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-7vq6m" [7905c7e9-0fea-41a1-b4de-7ec173ee2622] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
helpers_test.go:343: "netcat-668db85669-7vq6m" [7905c7e9-0fea-41a1-b4de-7ec173ee2622] Running
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.007278105s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.32s)
TestNetworkPlugins/group/kindnet/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220202220909-386638 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)
TestNetworkPlugins/group/kindnet/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:182: (dbg) Run:  kubectl --context kindnet-20220202220909-386638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)
TestNetworkPlugins/group/kindnet/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:232: (dbg) Run:  kubectl --context kindnet-20220202220909-386638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)
TestNetworkPlugins/group/cilium/Start (74.3s)
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20220202220909-386638 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=docker
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20220202220909-386638 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=docker: (1m14.301812943s)
--- PASS: TestNetworkPlugins/group/cilium/Start (74.30s)
TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220202220909-386638 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:182: (dbg) Run:  kubectl --context enable-default-cni-20220202220909-386638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:232: (dbg) Run:  kubectl --context enable-default-cni-20220202220909-386638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)
TestNetworkPlugins/group/bridge/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220202220909-386638 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    

TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:182: (dbg) Run:  kubectl --context bridge-20220202220909-386638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:232: (dbg) Run:  kubectl --context bridge-20220202220909-386638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/Start (58.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p custom-weave-20220202220909-386638 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p custom-weave-20220202220909-386638 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=docker: (58.791985237s)
--- PASS: TestNetworkPlugins/group/custom-weave/Start (58.79s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-20220202220909-386638 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.43s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (11.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context kubenet-20220202220909-386638 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-2cgz8" [1445073d-5c70-40c1-8bc0-7180f98aa9b6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-2cgz8" [1445073d-5c70-40c1-8bc0-7180f98aa9b6] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.006687765s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.21s)
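
Waits like `waiting 15m0s for pods matching "app=netcat"` above are deadline-bounded polls: the harness re-checks pod status until it reports Running or the timeout expires, then records how long health took (here 11.006687765s). A generic sketch of that pattern, not minikube's actual helper; the `pod_ready` stub is hypothetical, standing in for a real status query:

```python
import time

def wait_until(predicate, timeout: float, interval: float = 0.05) -> bool:
    """Poll `predicate` until it returns True or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while True:
        if predicate():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)

# Hypothetical stand-in for a pod status query: becomes Ready on the third poll.
polls = {"n": 0}
def pod_ready() -> bool:
    polls["n"] += 1
    return polls["n"] >= 3

print(wait_until(pod_ready, timeout=2.0))  # → True
```

Checking the clock only after each probe (rather than sleeping first) is what lets the fast cases above finish in well under the 15m ceiling.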

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220202220909-386638 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:182: (dbg) Run:  kubectl --context kubenet-20220202220909-386638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:232: (dbg) Run:  kubectl --context kubenet-20220202220909-386638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-weave-20220202220909-386638 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (0.45s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/NetCatPod (11.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context custom-weave-20220202220909-386638 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-vmj7f" [d48c1778-f342-405a-9622-913391b743ac] Pending
helpers_test.go:343: "netcat-668db85669-vmj7f" [d48c1778-f342-405a-9622-913391b743ac] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-vmj7f" [d48c1778-f342-405a-9622-913391b743ac] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 11.005912665s
--- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (11.21s)

                                                
                                    
TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:343: "cilium-q2mlv" [28349dd1-c999-4e84-9b11-f66b4983b862] Running
E0202 22:22:10.468656  386638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13251-383287-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/no-preload-20220202221047-386638/client.crt: no such file or directory
net_test.go:107: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.015237666s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/cilium/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20220202220909-386638 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/cilium/NetCatPod (11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context cilium-20220202220909-386638 replace --force -f testdata/netcat-deployment.yaml

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-fzb4h" [e244e634-a9aa-4170-988c-afde8d661a2e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
helpers_test.go:343: "netcat-668db85669-fzb4h" [e244e634-a9aa-4170-988c-afde8d661a2e] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 10.007614028s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (11.00s)

                                                
                                    
TestNetworkPlugins/group/cilium/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:163: (dbg) Run:  kubectl --context cilium-20220202220909-386638 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/cilium/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:182: (dbg) Run:  kubectl --context cilium-20220202220909-386638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/cilium/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:232: (dbg) Run:  kubectl --context cilium-20220202220909-386638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.16s)

                                                
                                    

Test skip (21/292)

TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:123: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:142: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:158: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.2/cached-images
aaa_download_only_test.go:123: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.2/binaries
aaa_download_only_test.go:142: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.2/kubectl
aaa_download_only_test.go:158: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.23.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.3-rc.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.3-rc.0/cached-images
aaa_download_only_test.go:123: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.3-rc.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.3-rc.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.3-rc.0/binaries
aaa_download_only_test.go:142: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.3-rc.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.3-rc.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.3-rc.0/kubectl
aaa_download_only_test.go:158: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.23.3-rc.0/kubectl (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:449: Skipping Olm addon till images are fixed
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:187: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:35: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:39: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.44s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-20220202221252-386638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20220202221252-386638
--- SKIP: TestStartStop/group/disable-driver-mounts (0.44s)

                                                
                                    
TestNetworkPlugins/group/flannel (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel
net_test.go:77: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:176: Cleaning up "flannel-20220202220909-386638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p flannel-20220202220909-386638
--- SKIP: TestNetworkPlugins/group/flannel (0.37s)

                                                
                                    