Test Report: Docker_Linux_containerd 15452

19e7ae6e3352ac9719effe9642660d00444e42cc:2023-07-11:30079

Failed tests (2/304)

Order  Failed test                  Duration (s)
-----  ---------------------------  ------------
42     TestDockerEnvContainerd      36.25
228    TestMissingContainerUpgrade  141.82
TestDockerEnvContainerd (36.25s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-475110 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-475110 --driver=docker  --container-runtime=containerd: (21.465086093s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-475110"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Snq78gheuAH9/agent.29722" SSH_AGENT_PID="29723" DOCKER_HOST=ssh://docker@127.0.0.1:32777 docker version"
docker_test.go:220: (dbg) Non-zero exit: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Snq78gheuAH9/agent.29722" SSH_AGENT_PID="29723" DOCKER_HOST=ssh://docker@127.0.0.1:32777 docker version": exit status 1 (144.427017ms)

-- stdout --
	Client: Docker Engine - Community
	 Version:           24.0.4
	 API version:       1.43
	 Go version:        go1.20.5
	 Git commit:        3713ee1
	 Built:             Fri Jul  7 14:50:57 2023
	 OS/Arch:           linux/amd64
	 Context:           default

-- /stdout --
** stderr ** 
	error during connect: Get "http://docker.example.com/v1.24/version": command [ssh -o ConnectTimeout=30 -l docker -p 32777 -- 127.0.0.1 docker system dial-stdio] has exited with exit status 255, please make sure the URL is valid, and Docker 18.09 or later is installed on the remote host: stderr=@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
	@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
	@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
	IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
	Someone could be eavesdropping on you right now (man-in-the-middle attack)!
	It is also possible that a host key has just been changed.
	The fingerprint for the RSA key sent by the remote host is
	SHA256:0dd4zNweIdRYHkfGqN4aegsiN3TXDoc6APRXgCK50I8.
	Please contact your system administrator.
	Add correct host key in /home/jenkins/.ssh/known_hosts to get rid of this message.
	Offending RSA key in /home/jenkins/.ssh/known_hosts:55
	  remove with:
	  ssh-keygen -f "/home/jenkins/.ssh/known_hosts" -R "[127.0.0.1]:32777"
	RSA host key for [127.0.0.1]:32777 has changed and you have requested strict checking.
	Host key verification failed.
	

** /stderr **
docker_test.go:222: failed to execute 'docker version', error: exit status 1, output: 
-- stdout --
	Client: Docker Engine - Community
	 Version:           24.0.4
	 API version:       1.43
	 Go version:        go1.20.5
	 Git commit:        3713ee1
	 Built:             Fri Jul  7 14:50:57 2023
	 OS/Arch:           linux/amd64
	 Context:           default

-- /stdout --
** stderr ** 
	error during connect: Get "http://docker.example.com/v1.24/version": command [ssh -o ConnectTimeout=30 -l docker -p 32777 -- 127.0.0.1 docker system dial-stdio] has exited with exit status 255, please make sure the URL is valid, and Docker 18.09 or later is installed on the remote host: stderr=@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
	@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
	@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
	IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
	Someone could be eavesdropping on you right now (man-in-the-middle attack)!
	It is also possible that a host key has just been changed.
	The fingerprint for the RSA key sent by the remote host is
	SHA256:0dd4zNweIdRYHkfGqN4aegsiN3TXDoc6APRXgCK50I8.
	Please contact your system administrator.
	Add correct host key in /home/jenkins/.ssh/known_hosts to get rid of this message.
	Offending RSA key in /home/jenkins/.ssh/known_hosts:55
	  remove with:
	  ssh-keygen -f "/home/jenkins/.ssh/known_hosts" -R "[127.0.0.1]:32777"
	RSA host key for [127.0.0.1]:32777 has changed and you have requested strict checking.
	Host key verification failed.
	

** /stderr **
panic.go:522: *** TestDockerEnvContainerd FAILED at 2023-07-11 00:23:40.532818583 +0000 UTC m=+265.000424978
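The stderr above shows the actual failure: the SSH host key recorded for `[127.0.0.1]:32777` in `/home/jenkins/.ssh/known_hosts` no longer matches, because the recreated minikube container reuses the forwarded port with a fresh host key, and strict checking then rejects the connection. The log itself names the fix; as a sketch only (the path and port are taken from this run and the `KNOWN_HOSTS` override is added here for illustration), clearing the stale entry looks like:

```shell
#!/bin/sh
# Sketch: drop the stale known_hosts entry named in the log so the next
# `minikube docker-env --ssh-host` connection can record the new host key.
# KNOWN_HOSTS defaults to the path from this run; override it elsewhere.
KNOWN_HOSTS="${KNOWN_HOSTS:-/home/jenkins/.ssh/known_hosts}"
if [ -f "$KNOWN_HOSTS" ]; then
    ssh-keygen -f "$KNOWN_HOSTS" -R "[127.0.0.1]:32777"
fi
```

On an ephemeral CI agent, an alternative is to relax verification for loopback connections only (for example `StrictHostKeyChecking=accept-new` in a `Host 127.0.0.1` stanza of `~/.ssh/config`, OpenSSH 7.6+), so recycled minikube ports never trip this check.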
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerEnvContainerd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect dockerenv-475110
helpers_test.go:235: (dbg) docker inspect dockerenv-475110:

-- stdout --
	[
	    {
	        "Id": "79e632095393a3025e2bbd2422314c3a21ea86cebbb6755f86bf9194f3d1e51b",
	        "Created": "2023-07-11T00:23:14.224209343Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 27645,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-11T00:23:14.517465247Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f94b2c6a944b0710bfefa359f405db44cc8016d29239db568adfdac750289e32",
	        "ResolvConfPath": "/var/lib/docker/containers/79e632095393a3025e2bbd2422314c3a21ea86cebbb6755f86bf9194f3d1e51b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/79e632095393a3025e2bbd2422314c3a21ea86cebbb6755f86bf9194f3d1e51b/hostname",
	        "HostsPath": "/var/lib/docker/containers/79e632095393a3025e2bbd2422314c3a21ea86cebbb6755f86bf9194f3d1e51b/hosts",
	        "LogPath": "/var/lib/docker/containers/79e632095393a3025e2bbd2422314c3a21ea86cebbb6755f86bf9194f3d1e51b/79e632095393a3025e2bbd2422314c3a21ea86cebbb6755f86bf9194f3d1e51b-json.log",
	        "Name": "/dockerenv-475110",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "dockerenv-475110:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "dockerenv-475110",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 8388608000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 16777216000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/390118a2593da7df4ee1ad2bb1058b4bd8b457e383fe406f65a4245c55514e2d-init/diff:/var/lib/docker/overlay2/55b5fe6b3fc30d9e85420ca3c89aed7630d4c18413967242922137edeac91683/diff",
	                "MergedDir": "/var/lib/docker/overlay2/390118a2593da7df4ee1ad2bb1058b4bd8b457e383fe406f65a4245c55514e2d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/390118a2593da7df4ee1ad2bb1058b4bd8b457e383fe406f65a4245c55514e2d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/390118a2593da7df4ee1ad2bb1058b4bd8b457e383fe406f65a4245c55514e2d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "dockerenv-475110",
	                "Source": "/var/lib/docker/volumes/dockerenv-475110/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "dockerenv-475110",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1689032083-15452@sha256:41e03f55414b4bc4a9169ee03de8460ddd2a95f539efd83fce689159a4e20667",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "dockerenv-475110",
	                "name.minikube.sigs.k8s.io": "dockerenv-475110",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f575ff0abc0b785ff710b75d094975eb13906d0279e401c7a0e8aa5d71034435",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32777"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32776"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32773"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32775"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32774"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f575ff0abc0b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "dockerenv-475110": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "79e632095393",
	                        "dockerenv-475110"
	                    ],
	                    "NetworkID": "18ae74e98de792424e9303e169468c5e63a5d0fe290a647c7eefcca23546d780",
	                    "EndpointID": "df48e65947fb8973062852e0de10af14be8de72f2076a3907afd17b21f2d2098",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p dockerenv-475110 -n dockerenv-475110
helpers_test.go:244: <<< TestDockerEnvContainerd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestDockerEnvContainerd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p dockerenv-475110 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p dockerenv-475110 logs -n 25: (1.032682597s)
helpers_test.go:252: TestDockerEnvContainerd logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |------------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	|  Command   |              Args              |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|------------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete     | -p download-docker-595423      | download-docker-595423 | jenkins | v1.30.1 | 11 Jul 23 00:19 UTC | 11 Jul 23 00:19 UTC |
	| start      | --download-only -p             | binary-mirror-294553   | jenkins | v1.30.1 | 11 Jul 23 00:19 UTC |                     |
	|            | binary-mirror-294553           |                        |         |         |                     |                     |
	|            | --alsologtostderr              |                        |         |         |                     |                     |
	|            | --binary-mirror                |                        |         |         |                     |                     |
	|            | http://127.0.0.1:45965         |                        |         |         |                     |                     |
	|            | --driver=docker                |                        |         |         |                     |                     |
	|            | --container-runtime=containerd |                        |         |         |                     |                     |
	| delete     | -p binary-mirror-294553        | binary-mirror-294553   | jenkins | v1.30.1 | 11 Jul 23 00:19 UTC | 11 Jul 23 00:19 UTC |
	| start      | -p addons-906872               | addons-906872          | jenkins | v1.30.1 | 11 Jul 23 00:19 UTC | 11 Jul 23 00:21 UTC |
	|            | --wait=true --memory=4000      |                        |         |         |                     |                     |
	|            | --alsologtostderr              |                        |         |         |                     |                     |
	|            | --addons=registry              |                        |         |         |                     |                     |
	|            | --addons=metrics-server        |                        |         |         |                     |                     |
	|            | --addons=volumesnapshots       |                        |         |         |                     |                     |
	|            | --addons=csi-hostpath-driver   |                        |         |         |                     |                     |
	|            | --addons=gcp-auth              |                        |         |         |                     |                     |
	|            | --addons=cloud-spanner         |                        |         |         |                     |                     |
	|            | --addons=inspektor-gadget      |                        |         |         |                     |                     |
	|            | --driver=docker                |                        |         |         |                     |                     |
	|            | --container-runtime=containerd |                        |         |         |                     |                     |
	|            | --addons=ingress               |                        |         |         |                     |                     |
	|            | --addons=ingress-dns           |                        |         |         |                     |                     |
	|            | --addons=helm-tiller           |                        |         |         |                     |                     |
	| addons     | enable headlamp                | addons-906872          | jenkins | v1.30.1 | 11 Jul 23 00:21 UTC | 11 Jul 23 00:21 UTC |
	|            | -p addons-906872               |                        |         |         |                     |                     |
	|            | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons     | disable cloud-spanner -p       | addons-906872          | jenkins | v1.30.1 | 11 Jul 23 00:21 UTC | 11 Jul 23 00:21 UTC |
	|            | addons-906872                  |                        |         |         |                     |                     |
	| addons     | addons-906872 addons disable   | addons-906872          | jenkins | v1.30.1 | 11 Jul 23 00:21 UTC | 11 Jul 23 00:21 UTC |
	|            | helm-tiller --alsologtostderr  |                        |         |         |                     |                     |
	|            | -v=1                           |                        |         |         |                     |                     |
	| ip         | addons-906872 ip               | addons-906872          | jenkins | v1.30.1 | 11 Jul 23 00:21 UTC | 11 Jul 23 00:21 UTC |
	| addons     | addons-906872 addons disable   | addons-906872          | jenkins | v1.30.1 | 11 Jul 23 00:21 UTC | 11 Jul 23 00:21 UTC |
	|            | registry --alsologtostderr     |                        |         |         |                     |                     |
	|            | -v=1                           |                        |         |         |                     |                     |
	| addons     | addons-906872 addons           | addons-906872          | jenkins | v1.30.1 | 11 Jul 23 00:21 UTC | 11 Jul 23 00:21 UTC |
	|            | disable metrics-server         |                        |         |         |                     |                     |
	|            | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons     | disable inspektor-gadget -p    | addons-906872          | jenkins | v1.30.1 | 11 Jul 23 00:21 UTC | 11 Jul 23 00:21 UTC |
	|            | addons-906872                  |                        |         |         |                     |                     |
	| ssh        | addons-906872 ssh curl -s      | addons-906872          | jenkins | v1.30.1 | 11 Jul 23 00:21 UTC | 11 Jul 23 00:21 UTC |
	|            | http://127.0.0.1/ -H 'Host:    |                        |         |         |                     |                     |
	|            | nginx.example.com'             |                        |         |         |                     |                     |
	| ip         | addons-906872 ip               | addons-906872          | jenkins | v1.30.1 | 11 Jul 23 00:21 UTC | 11 Jul 23 00:21 UTC |
	| addons     | addons-906872 addons disable   | addons-906872          | jenkins | v1.30.1 | 11 Jul 23 00:21 UTC | 11 Jul 23 00:21 UTC |
	|            | ingress-dns --alsologtostderr  |                        |         |         |                     |                     |
	|            | -v=1                           |                        |         |         |                     |                     |
	| addons     | addons-906872 addons disable   | addons-906872          | jenkins | v1.30.1 | 11 Jul 23 00:21 UTC | 11 Jul 23 00:21 UTC |
	|            | ingress --alsologtostderr -v=1 |                        |         |         |                     |                     |
	| addons     | addons-906872 addons           | addons-906872          | jenkins | v1.30.1 | 11 Jul 23 00:22 UTC | 11 Jul 23 00:22 UTC |
	|            | disable csi-hostpath-driver    |                        |         |         |                     |                     |
	|            | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons     | addons-906872 addons           | addons-906872          | jenkins | v1.30.1 | 11 Jul 23 00:22 UTC | 11 Jul 23 00:22 UTC |
	|            | disable volumesnapshots        |                        |         |         |                     |                     |
	|            | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons     | addons-906872 addons disable   | addons-906872          | jenkins | v1.30.1 | 11 Jul 23 00:22 UTC | 11 Jul 23 00:22 UTC |
	|            | gcp-auth --alsologtostderr     |                        |         |         |                     |                     |
	|            | -v=1                           |                        |         |         |                     |                     |
	| stop       | -p addons-906872               | addons-906872          | jenkins | v1.30.1 | 11 Jul 23 00:22 UTC | 11 Jul 23 00:23 UTC |
	| addons     | enable dashboard -p            | addons-906872          | jenkins | v1.30.1 | 11 Jul 23 00:23 UTC | 11 Jul 23 00:23 UTC |
	|            | addons-906872                  |                        |         |         |                     |                     |
	| addons     | disable dashboard -p           | addons-906872          | jenkins | v1.30.1 | 11 Jul 23 00:23 UTC | 11 Jul 23 00:23 UTC |
	|            | addons-906872                  |                        |         |         |                     |                     |
	| addons     | disable gvisor -p              | addons-906872          | jenkins | v1.30.1 | 11 Jul 23 00:23 UTC | 11 Jul 23 00:23 UTC |
	|            | addons-906872                  |                        |         |         |                     |                     |
	| delete     | -p addons-906872               | addons-906872          | jenkins | v1.30.1 | 11 Jul 23 00:23 UTC | 11 Jul 23 00:23 UTC |
	| start      | -p dockerenv-475110            | dockerenv-475110       | jenkins | v1.30.1 | 11 Jul 23 00:23 UTC | 11 Jul 23 00:23 UTC |
	|            | --driver=docker                |                        |         |         |                     |                     |
	|            | --container-runtime=containerd |                        |         |         |                     |                     |
	| docker-env | --ssh-host --ssh-add -p        | dockerenv-475110       | jenkins | v1.30.1 | 11 Jul 23 00:23 UTC | 11 Jul 23 00:23 UTC |
	|            | dockerenv-475110               |                        |         |         |                     |                     |
	|------------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/11 00:23:08
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0711 00:23:08.063662   27042 out.go:296] Setting OutFile to fd 1 ...
	I0711 00:23:08.063764   27042 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0711 00:23:08.063767   27042 out.go:309] Setting ErrFile to fd 2...
	I0711 00:23:08.063770   27042 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0711 00:23:08.063883   27042 root.go:337] Updating PATH: /home/jenkins/minikube-integration/15452-3381/.minikube/bin
	I0711 00:23:08.064493   27042 out.go:303] Setting JSON to false
	I0711 00:23:08.065564   27042 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":340,"bootTime":1689034648,"procs":412,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0711 00:23:08.065618   27042 start.go:137] virtualization: kvm guest
	I0711 00:23:08.069081   27042 out.go:177] * [dockerenv-475110] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0711 00:23:08.071194   27042 out.go:177]   - MINIKUBE_LOCATION=15452
	I0711 00:23:08.071207   27042 notify.go:220] Checking for updates...
	I0711 00:23:08.073315   27042 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0711 00:23:08.075562   27042 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15452-3381/kubeconfig
	I0711 00:23:08.078043   27042 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15452-3381/.minikube
	I0711 00:23:08.080259   27042 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0711 00:23:08.082287   27042 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0711 00:23:08.084404   27042 driver.go:373] Setting default libvirt URI to qemu:///system
	I0711 00:23:08.108834   27042 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0711 00:23:08.108930   27042 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0711 00:23:08.167714   27042 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-07-11 00:23:08.156879635 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0711 00:23:08.167825   27042 docker.go:294] overlay module found
	I0711 00:23:08.172726   27042 out.go:177] * Using the docker driver based on user configuration
	I0711 00:23:08.174900   27042 start.go:297] selected driver: docker
	I0711 00:23:08.174908   27042 start.go:944] validating driver "docker" against <nil>
	I0711 00:23:08.174920   27042 start.go:955] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0711 00:23:08.175047   27042 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0711 00:23:08.226002   27042 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-07-11 00:23:08.218078165 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0711 00:23:08.226142   27042 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0711 00:23:08.226562   27042 start_flags.go:382] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0711 00:23:08.226700   27042 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0711 00:23:08.228511   27042 out.go:177] * Using Docker driver with root privileges
	I0711 00:23:08.230037   27042 cni.go:84] Creating CNI manager for ""
	I0711 00:23:08.230051   27042 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0711 00:23:08.230060   27042 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0711 00:23:08.230067   27042 start_flags.go:319] config:
	{Name:dockerenv-475110 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1689032083-15452@sha256:41e03f55414b4bc4a9169ee03de8460ddd2a95f539efd83fce689159a4e20667 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:dockerenv-475110 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0711 00:23:08.231642   27042 out.go:177] * Starting control plane node dockerenv-475110 in cluster dockerenv-475110
	I0711 00:23:08.232977   27042 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0711 00:23:08.234390   27042 out.go:177] * Pulling base image ...
	I0711 00:23:08.235821   27042 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0711 00:23:08.235846   27042 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15452-3381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4
	I0711 00:23:08.235851   27042 cache.go:57] Caching tarball of preloaded images
	I0711 00:23:08.235861   27042 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1689032083-15452@sha256:41e03f55414b4bc4a9169ee03de8460ddd2a95f539efd83fce689159a4e20667 in local docker daemon
	I0711 00:23:08.235914   27042 preload.go:174] Found /home/jenkins/minikube-integration/15452-3381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0711 00:23:08.235920   27042 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on containerd
	I0711 00:23:08.236204   27042 profile.go:148] Saving config to /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/dockerenv-475110/config.json ...
	I0711 00:23:08.236221   27042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/dockerenv-475110/config.json: {Name:mk57ed728a3f1e4f1920102454d51966fb296089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0711 00:23:08.250094   27042 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1689032083-15452@sha256:41e03f55414b4bc4a9169ee03de8460ddd2a95f539efd83fce689159a4e20667 in local docker daemon, skipping pull
	I0711 00:23:08.250102   27042 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1689032083-15452@sha256:41e03f55414b4bc4a9169ee03de8460ddd2a95f539efd83fce689159a4e20667 exists in daemon, skipping load
	I0711 00:23:08.250118   27042 cache.go:195] Successfully downloaded all kic artifacts
	I0711 00:23:08.250141   27042 start.go:365] acquiring machines lock for dockerenv-475110: {Name:mk409f8a9bd8a4001e97154bbb1e5a6fb42a52fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0711 00:23:08.250216   27042 start.go:369] acquired machines lock for "dockerenv-475110" in 61.845µs
	I0711 00:23:08.250236   27042 start.go:93] Provisioning new machine with config: &{Name:dockerenv-475110 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1689032083-15452@sha256:41e03f55414b4bc4a9169ee03de8460ddd2a95f539efd83fce689159a4e20667 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:dockerenv-475110 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0711 00:23:08.250291   27042 start.go:125] createHost starting for "" (driver="docker")
	I0711 00:23:08.252885   27042 out.go:204] * Creating docker container (CPUs=2, Memory=8000MB) ...
	I0711 00:23:08.253088   27042 start.go:159] libmachine.API.Create for "dockerenv-475110" (driver="docker")
	I0711 00:23:08.253107   27042 client.go:168] LocalClient.Create starting
	I0711 00:23:08.253179   27042 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca.pem
	I0711 00:23:08.253201   27042 main.go:141] libmachine: Decoding PEM data...
	I0711 00:23:08.253212   27042 main.go:141] libmachine: Parsing certificate...
	I0711 00:23:08.253256   27042 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15452-3381/.minikube/certs/cert.pem
	I0711 00:23:08.253269   27042 main.go:141] libmachine: Decoding PEM data...
	I0711 00:23:08.253276   27042 main.go:141] libmachine: Parsing certificate...
	I0711 00:23:08.253573   27042 cli_runner.go:164] Run: docker network inspect dockerenv-475110 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0711 00:23:08.267797   27042 cli_runner.go:211] docker network inspect dockerenv-475110 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0711 00:23:08.267864   27042 network_create.go:281] running [docker network inspect dockerenv-475110] to gather additional debugging logs...
	I0711 00:23:08.267874   27042 cli_runner.go:164] Run: docker network inspect dockerenv-475110
	W0711 00:23:08.284671   27042 cli_runner.go:211] docker network inspect dockerenv-475110 returned with exit code 1
	I0711 00:23:08.284696   27042 network_create.go:284] error running [docker network inspect dockerenv-475110]: docker network inspect dockerenv-475110: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network dockerenv-475110 not found
	I0711 00:23:08.284713   27042 network_create.go:286] output of [docker network inspect dockerenv-475110]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network dockerenv-475110 not found
	
	** /stderr **
	I0711 00:23:08.284769   27042 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0711 00:23:08.302861   27042 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00144a060}
	I0711 00:23:08.302903   27042 network_create.go:123] attempt to create docker network dockerenv-475110 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0711 00:23:08.302955   27042 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=dockerenv-475110 dockerenv-475110
	I0711 00:23:08.362864   27042 network_create.go:107] docker network dockerenv-475110 192.168.49.0/24 created
	I0711 00:23:08.362882   27042 kic.go:117] calculated static IP "192.168.49.2" for the "dockerenv-475110" container
	I0711 00:23:08.362944   27042 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0711 00:23:08.380024   27042 cli_runner.go:164] Run: docker volume create dockerenv-475110 --label name.minikube.sigs.k8s.io=dockerenv-475110 --label created_by.minikube.sigs.k8s.io=true
	I0711 00:23:08.397574   27042 oci.go:103] Successfully created a docker volume dockerenv-475110
	I0711 00:23:08.397644   27042 cli_runner.go:164] Run: docker run --rm --name dockerenv-475110-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-475110 --entrypoint /usr/bin/test -v dockerenv-475110:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1689032083-15452@sha256:41e03f55414b4bc4a9169ee03de8460ddd2a95f539efd83fce689159a4e20667 -d /var/lib
	I0711 00:23:08.987317   27042 oci.go:107] Successfully prepared a docker volume dockerenv-475110
	I0711 00:23:08.987341   27042 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0711 00:23:08.987359   27042 kic.go:190] Starting extracting preloaded images to volume ...
	I0711 00:23:08.987428   27042 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15452-3381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v dockerenv-475110:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1689032083-15452@sha256:41e03f55414b4bc4a9169ee03de8460ddd2a95f539efd83fce689159a4e20667 -I lz4 -xf /preloaded.tar -C /extractDir
	I0711 00:23:14.150156   27042 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15452-3381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v dockerenv-475110:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1689032083-15452@sha256:41e03f55414b4bc4a9169ee03de8460ddd2a95f539efd83fce689159a4e20667 -I lz4 -xf /preloaded.tar -C /extractDir: (5.162682246s)
	I0711 00:23:14.150178   27042 kic.go:199] duration metric: took 5.162816 seconds to extract preloaded images to volume
	W0711 00:23:14.150489   27042 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0711 00:23:14.150571   27042 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0711 00:23:14.208307   27042 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname dockerenv-475110 --name dockerenv-475110 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-475110 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=dockerenv-475110 --network dockerenv-475110 --ip 192.168.49.2 --volume dockerenv-475110:/var --security-opt apparmor=unconfined --memory=8000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1689032083-15452@sha256:41e03f55414b4bc4a9169ee03de8460ddd2a95f539efd83fce689159a4e20667
	I0711 00:23:14.526639   27042 cli_runner.go:164] Run: docker container inspect dockerenv-475110 --format={{.State.Running}}
	I0711 00:23:14.548126   27042 cli_runner.go:164] Run: docker container inspect dockerenv-475110 --format={{.State.Status}}
	I0711 00:23:14.568581   27042 cli_runner.go:164] Run: docker exec dockerenv-475110 stat /var/lib/dpkg/alternatives/iptables
	I0711 00:23:14.626591   27042 oci.go:144] the created container "dockerenv-475110" has a running status.
	I0711 00:23:14.626625   27042 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/15452-3381/.minikube/machines/dockerenv-475110/id_rsa...
	I0711 00:23:14.838491   27042 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15452-3381/.minikube/machines/dockerenv-475110/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0711 00:23:14.861670   27042 cli_runner.go:164] Run: docker container inspect dockerenv-475110 --format={{.State.Status}}
	I0711 00:23:14.882092   27042 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0711 00:23:14.882106   27042 kic_runner.go:114] Args: [docker exec --privileged dockerenv-475110 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0711 00:23:14.959845   27042 cli_runner.go:164] Run: docker container inspect dockerenv-475110 --format={{.State.Status}}
	I0711 00:23:14.978993   27042 machine.go:88] provisioning docker machine ...
	I0711 00:23:14.979026   27042 ubuntu.go:169] provisioning hostname "dockerenv-475110"
	I0711 00:23:14.979131   27042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-475110
	I0711 00:23:15.007881   27042 main.go:141] libmachine: Using SSH client type: native
	I0711 00:23:15.008476   27042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32777 <nil> <nil>}
	I0711 00:23:15.008486   27042 main.go:141] libmachine: About to run SSH command:
	sudo hostname dockerenv-475110 && echo "dockerenv-475110" | sudo tee /etc/hostname
	I0711 00:23:15.244263   27042 main.go:141] libmachine: SSH cmd err, output: <nil>: dockerenv-475110
	
	I0711 00:23:15.244408   27042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-475110
	I0711 00:23:15.264038   27042 main.go:141] libmachine: Using SSH client type: native
	I0711 00:23:15.264463   27042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32777 <nil> <nil>}
	I0711 00:23:15.264475   27042 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdockerenv-475110' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 dockerenv-475110/g' /etc/hosts;
				else 
					echo '127.0.1.1 dockerenv-475110' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0711 00:23:15.393673   27042 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0711 00:23:15.393694   27042 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15452-3381/.minikube CaCertPath:/home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15452-3381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15452-3381/.minikube}
	I0711 00:23:15.393717   27042 ubuntu.go:177] setting up certificates
	I0711 00:23:15.393725   27042 provision.go:83] configureAuth start
	I0711 00:23:15.393766   27042 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-475110
	I0711 00:23:15.410561   27042 provision.go:138] copyHostCerts
	I0711 00:23:15.410607   27042 exec_runner.go:144] found /home/jenkins/minikube-integration/15452-3381/.minikube/cert.pem, removing ...
	I0711 00:23:15.410614   27042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/15452-3381/.minikube/cert.pem
	I0711 00:23:15.410669   27042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15452-3381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15452-3381/.minikube/cert.pem (1123 bytes)
	I0711 00:23:15.410742   27042 exec_runner.go:144] found /home/jenkins/minikube-integration/15452-3381/.minikube/key.pem, removing ...
	I0711 00:23:15.410745   27042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/15452-3381/.minikube/key.pem
	I0711 00:23:15.410765   27042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15452-3381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15452-3381/.minikube/key.pem (1679 bytes)
	I0711 00:23:15.410809   27042 exec_runner.go:144] found /home/jenkins/minikube-integration/15452-3381/.minikube/ca.pem, removing ...
	I0711 00:23:15.410812   27042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/15452-3381/.minikube/ca.pem
	I0711 00:23:15.410828   27042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15452-3381/.minikube/ca.pem (1078 bytes)
	I0711 00:23:15.410866   27042 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15452-3381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca-key.pem org=jenkins.dockerenv-475110 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube dockerenv-475110]
	I0711 00:23:15.519130   27042 provision.go:172] copyRemoteCerts
	I0711 00:23:15.519180   27042 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0711 00:23:15.519224   27042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-475110
	I0711 00:23:15.536352   27042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32777 SSHKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/dockerenv-475110/id_rsa Username:docker}
	I0711 00:23:15.631783   27042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0711 00:23:15.655679   27042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0711 00:23:15.676815   27042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0711 00:23:15.699464   27042 provision.go:86] duration metric: configureAuth took 305.730091ms
	I0711 00:23:15.699478   27042 ubuntu.go:193] setting minikube options for container-runtime
	I0711 00:23:15.699623   27042 config.go:182] Loaded profile config "dockerenv-475110": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0711 00:23:15.699627   27042 machine.go:91] provisioned docker machine in 720.621955ms
	I0711 00:23:15.699631   27042 client.go:171] LocalClient.Create took 7.446521147s
	I0711 00:23:15.699648   27042 start.go:167] duration metric: libmachine.API.Create for "dockerenv-475110" took 7.446561196s
	I0711 00:23:15.699658   27042 start.go:300] post-start starting for "dockerenv-475110" (driver="docker")
	I0711 00:23:15.699665   27042 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0711 00:23:15.699704   27042 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0711 00:23:15.699735   27042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-475110
	I0711 00:23:15.715853   27042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32777 SSHKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/dockerenv-475110/id_rsa Username:docker}
	I0711 00:23:15.803628   27042 ssh_runner.go:195] Run: cat /etc/os-release
	I0711 00:23:15.806435   27042 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0711 00:23:15.806455   27042 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0711 00:23:15.806464   27042 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0711 00:23:15.806468   27042 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0711 00:23:15.806476   27042 filesync.go:126] Scanning /home/jenkins/minikube-integration/15452-3381/.minikube/addons for local assets ...
	I0711 00:23:15.806522   27042 filesync.go:126] Scanning /home/jenkins/minikube-integration/15452-3381/.minikube/files for local assets ...
	I0711 00:23:15.806536   27042 start.go:303] post-start completed in 106.874814ms
	I0711 00:23:15.806802   27042 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-475110
	I0711 00:23:15.822938   27042 profile.go:148] Saving config to /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/dockerenv-475110/config.json ...
	I0711 00:23:15.823145   27042 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0711 00:23:15.823175   27042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-475110
	I0711 00:23:15.837906   27042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32777 SSHKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/dockerenv-475110/id_rsa Username:docker}
	I0711 00:23:15.922380   27042 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0711 00:23:15.926305   27042 start.go:128] duration metric: createHost completed in 7.6760015s
	I0711 00:23:15.926332   27042 start.go:83] releasing machines lock for "dockerenv-475110", held for 7.676107239s
	I0711 00:23:15.926390   27042 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-475110
	I0711 00:23:15.945375   27042 ssh_runner.go:195] Run: cat /version.json
	I0711 00:23:15.945431   27042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-475110
	I0711 00:23:15.945451   27042 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0711 00:23:15.945521   27042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-475110
	I0711 00:23:15.963184   27042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32777 SSHKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/dockerenv-475110/id_rsa Username:docker}
	I0711 00:23:15.964741   27042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32777 SSHKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/dockerenv-475110/id_rsa Username:docker}
	I0711 00:23:16.136392   27042 ssh_runner.go:195] Run: systemctl --version
	I0711 00:23:16.140385   27042 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0711 00:23:16.144402   27042 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0711 00:23:16.167880   27042 cni.go:236] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0711 00:23:16.167952   27042 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0711 00:23:16.195648   27042 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0711 00:23:16.195661   27042 start.go:466] detecting cgroup driver to use...
	I0711 00:23:16.195692   27042 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0711 00:23:16.195736   27042 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0711 00:23:16.208814   27042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0711 00:23:16.221522   27042 docker.go:196] disabling cri-docker service (if available) ...
	I0711 00:23:16.221595   27042 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0711 00:23:16.236922   27042 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0711 00:23:16.249571   27042 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0711 00:23:16.331020   27042 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0711 00:23:16.414970   27042 docker.go:212] disabling docker service ...
	I0711 00:23:16.415036   27042 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0711 00:23:16.435631   27042 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0711 00:23:16.446097   27042 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0711 00:23:16.521064   27042 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0711 00:23:16.599877   27042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0711 00:23:16.609012   27042 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0711 00:23:16.622420   27042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0711 00:23:16.631120   27042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0711 00:23:16.639609   27042 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0711 00:23:16.639653   27042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0711 00:23:16.648025   27042 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0711 00:23:16.656424   27042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0711 00:23:16.664779   27042 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0711 00:23:16.673237   27042 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0711 00:23:16.680950   27042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
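The config.toml rewrites logged above can be exercised against a scratch copy. A minimal sketch, assuming GNU sed; the file path and the sample contents below are illustrative, not the node's real /etc/containerd/config.toml:

```shell
# Scratch stand-in for /etc/containerd/config.toml (illustrative contents).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
  sandbox_image = "registry.k8s.io/pause:3.6"
  SystemdCgroup = true
  runtime_type = "io.containerd.runtime.v1.linux"
  conf_dir = "/etc/cni/net.d-old"
EOF
# Same sed substitutions the log shows, preserving leading indentation via \1.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$cfg"
sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$cfg"
cat "$cfg"
```

Setting `SystemdCgroup = false` matches the "cgroupfs" driver detected on the host earlier in the log.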
	I0711 00:23:16.689343   27042 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0711 00:23:16.697850   27042 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0711 00:23:16.707063   27042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0711 00:23:16.787080   27042 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0711 00:23:16.859016   27042 start.go:513] Will wait 60s for socket path /run/containerd/containerd.sock
	I0711 00:23:16.859078   27042 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0711 00:23:16.862543   27042 start.go:534] Will wait 60s for crictl version
	I0711 00:23:16.862584   27042 ssh_runner.go:195] Run: which crictl
	I0711 00:23:16.865392   27042 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0711 00:23:16.897246   27042 start.go:550] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.21
	RuntimeApiVersion:  v1
	I0711 00:23:16.897299   27042 ssh_runner.go:195] Run: containerd --version
	I0711 00:23:16.921383   27042 ssh_runner.go:195] Run: containerd --version
	I0711 00:23:16.946726   27042 out.go:177] * Preparing Kubernetes v1.27.3 on containerd 1.6.21 ...
	I0711 00:23:16.948300   27042 cli_runner.go:164] Run: docker network inspect dockerenv-475110 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0711 00:23:16.964177   27042 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0711 00:23:16.968409   27042 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0711 00:23:16.979795   27042 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0711 00:23:16.979844   27042 ssh_runner.go:195] Run: sudo crictl images --output json
	I0711 00:23:17.010537   27042 containerd.go:604] all images are preloaded for containerd runtime.
	I0711 00:23:17.010547   27042 containerd.go:518] Images already preloaded, skipping extraction
	I0711 00:23:17.010585   27042 ssh_runner.go:195] Run: sudo crictl images --output json
	I0711 00:23:17.046405   27042 containerd.go:604] all images are preloaded for containerd runtime.
	I0711 00:23:17.046420   27042 cache_images.go:84] Images are preloaded, skipping loading
	I0711 00:23:17.046481   27042 ssh_runner.go:195] Run: sudo crictl info
	I0711 00:23:17.083012   27042 cni.go:84] Creating CNI manager for ""
	I0711 00:23:17.083028   27042 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0711 00:23:17.083045   27042 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0711 00:23:17.083062   27042 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:dockerenv-475110 NodeName:dockerenv-475110 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0711 00:23:17.083229   27042 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "dockerenv-475110"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
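A quick structural sanity check on a multi-document kubeadm config like the one logged above is to count the documents and list their kinds. A sketch only; the heredoc reproduces just the `apiVersion`/`kind` skeleton of the logged config, and the temp file path is illustrative:

```shell
# Skeleton of the four kubeadm config documents from the log (illustrative file).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
grep -c '^kind:' "$cfg"   # one kind per document: 4
grep '^kind:' "$cfg"
```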
	I0711 00:23:17.083303   27042 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=dockerenv-475110 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:dockerenv-475110 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0711 00:23:17.083378   27042 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0711 00:23:17.092548   27042 binaries.go:44] Found k8s binaries, skipping transfer
	I0711 00:23:17.092599   27042 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0711 00:23:17.101252   27042 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0711 00:23:17.118017   27042 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0711 00:23:17.136441   27042 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0711 00:23:17.154347   27042 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0711 00:23:17.157625   27042 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
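The hosts-file update above is an idempotent drop-then-append pattern: strip any stale line for the name, then write the fresh mapping. A reproduction sketch against a scratch file (the real target is /etc/hosts; the temp file and its seed contents are illustrative):

```shell
# Scratch stand-in for /etc/hosts, pre-seeded with a stale mapping.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.49.1\tcontrol-plane.minikube.internal\n' > "$hosts"
# Drop any existing entry for the name, then append the desired one,
# mirroring the grep -v / echo / cp pipeline in the log.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  printf '192.168.49.2\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

Because the old line is removed before the new one is appended, re-running the pipeline never accumulates duplicate entries.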
	I0711 00:23:17.167538   27042 certs.go:56] Setting up /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/dockerenv-475110 for IP: 192.168.49.2
	I0711 00:23:17.167571   27042 certs.go:190] acquiring lock for shared ca certs: {Name:mka06d51c60707055e156951f7d4275743d01d04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0711 00:23:17.167746   27042 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15452-3381/.minikube/ca.key
	I0711 00:23:17.167782   27042 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15452-3381/.minikube/proxy-client-ca.key
	I0711 00:23:17.167822   27042 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/dockerenv-475110/client.key
	I0711 00:23:17.167838   27042 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/dockerenv-475110/client.crt with IP's: []
	I0711 00:23:17.406189   27042 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/dockerenv-475110/client.crt ...
	I0711 00:23:17.406203   27042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/dockerenv-475110/client.crt: {Name:mk716fe44a5849f225b8d82dc400d922babf7f1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0711 00:23:17.406365   27042 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/dockerenv-475110/client.key ...
	I0711 00:23:17.406370   27042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/dockerenv-475110/client.key: {Name:mk5b4f1a1a4e95e992d5d0fee0b7c4e7a9e2173a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0711 00:23:17.406439   27042 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/dockerenv-475110/apiserver.key.dd3b5fb2
	I0711 00:23:17.406448   27042 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/dockerenv-475110/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0711 00:23:17.492453   27042 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/dockerenv-475110/apiserver.crt.dd3b5fb2 ...
	I0711 00:23:17.492466   27042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/dockerenv-475110/apiserver.crt.dd3b5fb2: {Name:mk5aa2d67aa1021c5311acd4a99398dcfb179ec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0711 00:23:17.492625   27042 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/dockerenv-475110/apiserver.key.dd3b5fb2 ...
	I0711 00:23:17.492631   27042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/dockerenv-475110/apiserver.key.dd3b5fb2: {Name:mk74171116f542cd25f9c136507556b0b2c913a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0711 00:23:17.492690   27042 certs.go:337] copying /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/dockerenv-475110/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/dockerenv-475110/apiserver.crt
	I0711 00:23:17.492771   27042 certs.go:341] copying /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/dockerenv-475110/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/dockerenv-475110/apiserver.key
	I0711 00:23:17.492819   27042 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/dockerenv-475110/proxy-client.key
	I0711 00:23:17.492829   27042 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/dockerenv-475110/proxy-client.crt with IP's: []
	I0711 00:23:17.607731   27042 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/dockerenv-475110/proxy-client.crt ...
	I0711 00:23:17.607746   27042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/dockerenv-475110/proxy-client.crt: {Name:mk007085f01a0be127cf168b3066af189bae9190 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0711 00:23:17.607919   27042 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/dockerenv-475110/proxy-client.key ...
	I0711 00:23:17.607925   27042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/dockerenv-475110/proxy-client.key: {Name:mk0bf16e3f5697d3182b91175c03bb9597a37ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0711 00:23:17.608103   27042 certs.go:437] found cert: /home/jenkins/minikube-integration/15452-3381/.minikube/certs/home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca-key.pem (1679 bytes)
	I0711 00:23:17.608146   27042 certs.go:437] found cert: /home/jenkins/minikube-integration/15452-3381/.minikube/certs/home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca.pem (1078 bytes)
	I0711 00:23:17.608178   27042 certs.go:437] found cert: /home/jenkins/minikube-integration/15452-3381/.minikube/certs/home/jenkins/minikube-integration/15452-3381/.minikube/certs/cert.pem (1123 bytes)
	I0711 00:23:17.608197   27042 certs.go:437] found cert: /home/jenkins/minikube-integration/15452-3381/.minikube/certs/home/jenkins/minikube-integration/15452-3381/.minikube/certs/key.pem (1679 bytes)
	I0711 00:23:17.608774   27042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/dockerenv-475110/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0711 00:23:17.634162   27042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/dockerenv-475110/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0711 00:23:17.654880   27042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/dockerenv-475110/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0711 00:23:17.676921   27042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/dockerenv-475110/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0711 00:23:17.697381   27042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0711 00:23:17.717758   27042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0711 00:23:17.739553   27042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0711 00:23:17.760250   27042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0711 00:23:17.780164   27042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0711 00:23:17.801314   27042 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0711 00:23:17.819780   27042 ssh_runner.go:195] Run: openssl version
	I0711 00:23:17.824401   27042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0711 00:23:17.833466   27042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0711 00:23:17.836746   27042 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 11 00:19 /usr/share/ca-certificates/minikubeCA.pem
	I0711 00:23:17.836786   27042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0711 00:23:17.842689   27042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0711 00:23:17.851003   27042 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0711 00:23:17.853701   27042 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0711 00:23:17.853749   27042 kubeadm.go:404] StartCluster: {Name:dockerenv-475110 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1689032083-15452@sha256:41e03f55414b4bc4a9169ee03de8460ddd2a95f539efd83fce689159a4e20667 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:dockerenv-475110 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0711 00:23:17.853821   27042 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0711 00:23:17.853868   27042 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0711 00:23:17.884747   27042 cri.go:89] found id: ""
	I0711 00:23:17.884806   27042 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0711 00:23:17.893084   27042 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0711 00:23:17.901084   27042 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0711 00:23:17.901137   27042 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0711 00:23:17.908779   27042 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0711 00:23:17.908826   27042 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0711 00:23:17.956146   27042 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0711 00:23:17.956207   27042 kubeadm.go:322] [preflight] Running pre-flight checks
	I0711 00:23:17.990786   27042 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0711 00:23:17.990863   27042 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1037-gcp
	I0711 00:23:17.990901   27042 kubeadm.go:322] OS: Linux
	I0711 00:23:17.990964   27042 kubeadm.go:322] CGROUPS_CPU: enabled
	I0711 00:23:17.991039   27042 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0711 00:23:17.991076   27042 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0711 00:23:17.991142   27042 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0711 00:23:17.991179   27042 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0711 00:23:17.991219   27042 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0711 00:23:17.991283   27042 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0711 00:23:17.991326   27042 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0711 00:23:17.991366   27042 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0711 00:23:18.051109   27042 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0711 00:23:18.051258   27042 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0711 00:23:18.051416   27042 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0711 00:23:18.246617   27042 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0711 00:23:18.248469   27042 out.go:204]   - Generating certificates and keys ...
	I0711 00:23:18.248610   27042 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0711 00:23:18.248693   27042 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0711 00:23:18.377231   27042 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0711 00:23:18.498775   27042 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0711 00:23:18.590413   27042 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0711 00:23:18.695190   27042 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0711 00:23:18.823407   27042 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0711 00:23:18.823569   27042 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [dockerenv-475110 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0711 00:23:18.966181   27042 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0711 00:23:18.966385   27042 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [dockerenv-475110 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0711 00:23:19.429237   27042 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0711 00:23:19.619217   27042 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0711 00:23:19.779503   27042 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0711 00:23:19.779599   27042 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0711 00:23:19.965421   27042 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0711 00:23:20.064988   27042 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0711 00:23:20.152794   27042 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0711 00:23:20.378127   27042 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0711 00:23:20.389532   27042 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0711 00:23:20.390332   27042 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0711 00:23:20.390404   27042 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0711 00:23:20.466033   27042 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0711 00:23:20.468235   27042 out.go:204]   - Booting up control plane ...
	I0711 00:23:20.468355   27042 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0711 00:23:20.468776   27042 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0711 00:23:20.469622   27042 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0711 00:23:20.470329   27042 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0711 00:23:20.473600   27042 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0711 00:23:25.975429   27042 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.501790 seconds
	I0711 00:23:25.975598   27042 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0711 00:23:25.987295   27042 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0711 00:23:26.506831   27042 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0711 00:23:26.507021   27042 kubeadm.go:322] [mark-control-plane] Marking the node dockerenv-475110 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0711 00:23:27.016327   27042 kubeadm.go:322] [bootstrap-token] Using token: czm3fm.ec4uv26xeskt4l4q
	I0711 00:23:27.017944   27042 out.go:204]   - Configuring RBAC rules ...
	I0711 00:23:27.018102   27042 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0711 00:23:27.020997   27042 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0711 00:23:27.028567   27042 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0711 00:23:27.031077   27042 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0711 00:23:27.033779   27042 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0711 00:23:27.036651   27042 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0711 00:23:27.047897   27042 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0711 00:23:27.260687   27042 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0711 00:23:27.476705   27042 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0711 00:23:27.477980   27042 kubeadm.go:322] 
	I0711 00:23:27.478058   27042 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0711 00:23:27.478065   27042 kubeadm.go:322] 
	I0711 00:23:27.478159   27042 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0711 00:23:27.478164   27042 kubeadm.go:322] 
	I0711 00:23:27.478194   27042 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0711 00:23:27.478268   27042 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0711 00:23:27.478336   27042 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0711 00:23:27.478341   27042 kubeadm.go:322] 
	I0711 00:23:27.478413   27042 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0711 00:23:27.478417   27042 kubeadm.go:322] 
	I0711 00:23:27.478477   27042 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0711 00:23:27.478482   27042 kubeadm.go:322] 
	I0711 00:23:27.478544   27042 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0711 00:23:27.478638   27042 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0711 00:23:27.478724   27042 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0711 00:23:27.478729   27042 kubeadm.go:322] 
	I0711 00:23:27.478844   27042 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0711 00:23:27.478940   27042 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0711 00:23:27.478945   27042 kubeadm.go:322] 
	I0711 00:23:27.479050   27042 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token czm3fm.ec4uv26xeskt4l4q \
	I0711 00:23:27.479186   27042 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:97bebf85545463e28e8c1fd71d5fccfa7beefe672ed16176ffafb3a239a4fc4f \
	I0711 00:23:27.479218   27042 kubeadm.go:322] 	--control-plane 
	I0711 00:23:27.479222   27042 kubeadm.go:322] 
	I0711 00:23:27.479330   27042 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0711 00:23:27.479335   27042 kubeadm.go:322] 
	I0711 00:23:27.479438   27042 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token czm3fm.ec4uv26xeskt4l4q \
	I0711 00:23:27.479561   27042 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:97bebf85545463e28e8c1fd71d5fccfa7beefe672ed16176ffafb3a239a4fc4f 
	I0711 00:23:27.482217   27042 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1037-gcp\n", err: exit status 1
	I0711 00:23:27.482377   27042 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0711 00:23:27.482396   27042 cni.go:84] Creating CNI manager for ""
	I0711 00:23:27.482408   27042 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0711 00:23:27.484851   27042 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0711 00:23:27.486110   27042 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0711 00:23:27.489352   27042 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0711 00:23:27.489360   27042 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0711 00:23:27.505914   27042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0711 00:23:28.229215   27042 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0711 00:23:28.229334   27042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0711 00:23:28.229418   27042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=72491f7d3796d9f0aa01d4c526b07206f092e604 minikube.k8s.io/name=dockerenv-475110 minikube.k8s.io/updated_at=2023_07_11T00_23_28_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0711 00:23:28.312027   27042 kubeadm.go:1081] duration metric: took 82.756476ms to wait for elevateKubeSystemPrivileges.
	I0711 00:23:28.312053   27042 ops.go:34] apiserver oom_adj: -16
	I0711 00:23:28.322695   27042 kubeadm.go:406] StartCluster complete in 10.468943736s
	I0711 00:23:28.322725   27042 settings.go:142] acquiring lock: {Name:mk292abf46436ce17435480484ca010f83f19dc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0711 00:23:28.322802   27042 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15452-3381/kubeconfig
	I0711 00:23:28.323739   27042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15452-3381/kubeconfig: {Name:mk7a4dda1ca27c23b8e4a4d2dab8f3cedddd8401 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0711 00:23:28.323995   27042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0711 00:23:28.324015   27042 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0711 00:23:28.324102   27042 addons.go:66] Setting storage-provisioner=true in profile "dockerenv-475110"
	I0711 00:23:28.324111   27042 addons.go:66] Setting default-storageclass=true in profile "dockerenv-475110"
	I0711 00:23:28.324122   27042 addons.go:228] Setting addon storage-provisioner=true in "dockerenv-475110"
	I0711 00:23:28.324126   27042 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "dockerenv-475110"
	I0711 00:23:28.324170   27042 host.go:66] Checking if "dockerenv-475110" exists ...
	I0711 00:23:28.324213   27042 config.go:182] Loaded profile config "dockerenv-475110": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0711 00:23:28.324512   27042 cli_runner.go:164] Run: docker container inspect dockerenv-475110 --format={{.State.Status}}
	I0711 00:23:28.324660   27042 cli_runner.go:164] Run: docker container inspect dockerenv-475110 --format={{.State.Status}}
	I0711 00:23:28.348635   27042 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0711 00:23:28.350230   27042 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0711 00:23:28.350243   27042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0711 00:23:28.350319   27042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-475110
	I0711 00:23:28.369936   27042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32777 SSHKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/dockerenv-475110/id_rsa Username:docker}
	I0711 00:23:28.391886   27042 addons.go:228] Setting addon default-storageclass=true in "dockerenv-475110"
	I0711 00:23:28.391924   27042 host.go:66] Checking if "dockerenv-475110" exists ...
	I0711 00:23:28.392379   27042 cli_runner.go:164] Run: docker container inspect dockerenv-475110 --format={{.State.Status}}
	I0711 00:23:28.409088   27042 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0711 00:23:28.409098   27042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0711 00:23:28.409146   27042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-475110
	I0711 00:23:28.423520   27042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32777 SSHKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/dockerenv-475110/id_rsa Username:docker}
	I0711 00:23:28.577480   27042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0711 00:23:28.591278   27042 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0711 00:23:28.591835   27042 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0711 00:23:28.896998   27042 kapi.go:248] "coredns" deployment in "kube-system" namespace and "dockerenv-475110" context rescaled to 1 replicas
	I0711 00:23:28.897037   27042 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0711 00:23:28.899116   27042 out.go:177] * Verifying Kubernetes components...
	I0711 00:23:28.901599   27042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0711 00:23:29.221028   27042 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0711 00:23:29.406331   27042 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0711 00:23:29.405238   27042 api_server.go:52] waiting for apiserver process to appear ...
	I0711 00:23:29.407714   27042 addons.go:499] enable addons completed in 1.083702491s: enabled=[storage-provisioner default-storageclass]
	I0711 00:23:29.407737   27042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0711 00:23:29.418007   27042 api_server.go:72] duration metric: took 520.927914ms to wait for apiserver process to appear ...
	I0711 00:23:29.418023   27042 api_server.go:88] waiting for apiserver healthz status ...
	I0711 00:23:29.418039   27042 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0711 00:23:29.423905   27042 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0711 00:23:29.425062   27042 api_server.go:141] control plane version: v1.27.3
	I0711 00:23:29.425073   27042 api_server.go:131] duration metric: took 7.047045ms to wait for apiserver health ...
	I0711 00:23:29.425079   27042 system_pods.go:43] waiting for kube-system pods to appear ...
	I0711 00:23:29.431580   27042 system_pods.go:59] 5 kube-system pods found
	I0711 00:23:29.431595   27042 system_pods.go:61] "etcd-dockerenv-475110" [3f5977b8-6ef7-431b-9c22-2083f647d4a3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0711 00:23:29.431602   27042 system_pods.go:61] "kube-apiserver-dockerenv-475110" [48977b65-9e12-4e40-a8f2-7732df1272f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0711 00:23:29.431610   27042 system_pods.go:61] "kube-controller-manager-dockerenv-475110" [ed3b914b-53a2-4195-8853-00808e6e0ad9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0711 00:23:29.431616   27042 system_pods.go:61] "kube-scheduler-dockerenv-475110" [b1c40674-b41e-4cac-b0d8-6253c9280667] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0711 00:23:29.431620   27042 system_pods.go:61] "storage-provisioner" [dd762c58-1970-44db-b9ea-e3484f095468] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0711 00:23:29.431625   27042 system_pods.go:74] duration metric: took 6.542048ms to wait for pod list to return data ...
	I0711 00:23:29.431633   27042 kubeadm.go:581] duration metric: took 534.567065ms to wait for : map[apiserver:true system_pods:true] ...
	I0711 00:23:29.431643   27042 node_conditions.go:102] verifying NodePressure condition ...
	I0711 00:23:29.434137   27042 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0711 00:23:29.434149   27042 node_conditions.go:123] node cpu capacity is 8
	I0711 00:23:29.434157   27042 node_conditions.go:105] duration metric: took 2.510502ms to run NodePressure ...
	I0711 00:23:29.434165   27042 start.go:228] waiting for startup goroutines ...
	I0711 00:23:29.434170   27042 start.go:233] waiting for cluster config update ...
	I0711 00:23:29.434177   27042 start.go:242] writing updated cluster config ...
	I0711 00:23:29.434424   27042 ssh_runner.go:195] Run: rm -f paused
	I0711 00:23:29.479016   27042 start.go:642] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0711 00:23:29.481075   27042 out.go:177] * Done! kubectl is now configured to use "dockerenv-475110" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	73b86d8305afc       b0b1fa0f58c6e       Less than a second ago   Running             kindnet-cni               0                   38aa7d854c12b       kindnet-f5bv9
	8dbd29ade1e60       5780543258cf0       Less than a second ago   Running             kube-proxy                0                   56bcfed86be49       kube-proxy-mrzzf
	e18d2c89493c5       6e38f40d628db       Less than a second ago   Running             storage-provisioner       0                   9e298524653cb       storage-provisioner
	2a65eed88df36       08a0c939e61b7       19 seconds ago           Running             kube-apiserver            0                   ea0760ef679f4       kube-apiserver-dockerenv-475110
	a4321e60c05a9       7cffc01dba0e1       19 seconds ago           Running             kube-controller-manager   0                   9a23dc066d3c1       kube-controller-manager-dockerenv-475110
	fdce337acdf28       41697ceeb70b3       19 seconds ago           Running             kube-scheduler            0                   8626c8799e1be       kube-scheduler-dockerenv-475110
	43137135a85c6       86b6af7dd652c       19 seconds ago           Running             etcd                      0                   4efced6e2e675       etcd-dockerenv-475110
	
	* 
	* ==> containerd <==
	* Jul 11 00:23:40 dockerenv-475110 containerd[779]: time="2023-07-11T00:23:40.648260047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:dd762c58-1970-44db-b9ea-e3484f095468,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e298524653cb4a7cb3adf476c79fdc9006f9dfd39f36866eb8331ba153b497e\""
	Jul 11 00:23:40 dockerenv-475110 containerd[779]: time="2023-07-11T00:23:40.652735210Z" level=info msg="CreateContainer within sandbox \"9e298524653cb4a7cb3adf476c79fdc9006f9dfd39f36866eb8331ba153b497e\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Jul 11 00:23:40 dockerenv-475110 containerd[779]: time="2023-07-11T00:23:40.654175750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 11 00:23:40 dockerenv-475110 containerd[779]: time="2023-07-11T00:23:40.654316413Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 11 00:23:40 dockerenv-475110 containerd[779]: time="2023-07-11T00:23:40.654333874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 11 00:23:40 dockerenv-475110 containerd[779]: time="2023-07-11T00:23:40.654592312Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/56bcfed86be49a6163c161de217f36f6d11eb2aced8c286aa4b3f444d02e68d2 pid=1788 runtime=io.containerd.runc.v2
	Jul 11 00:23:40 dockerenv-475110 containerd[779]: time="2023-07-11T00:23:40.662215511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 11 00:23:40 dockerenv-475110 containerd[779]: time="2023-07-11T00:23:40.662376752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 11 00:23:40 dockerenv-475110 containerd[779]: time="2023-07-11T00:23:40.662394610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 11 00:23:40 dockerenv-475110 containerd[779]: time="2023-07-11T00:23:40.662816279Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/38aa7d854c12b4710a78f1ebfdb11e3007aad8eb91b3258e3b80ea396bd48fdb pid=1783 runtime=io.containerd.runc.v2
	Jul 11 00:23:40 dockerenv-475110 containerd[779]: time="2023-07-11T00:23:40.668632475Z" level=info msg="CreateContainer within sandbox \"9e298524653cb4a7cb3adf476c79fdc9006f9dfd39f36866eb8331ba153b497e\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"e18d2c89493c5dc308003373f0f5f915a39b9f5fe659b2b6e2e1fe2b410761e0\""
	Jul 11 00:23:40 dockerenv-475110 containerd[779]: time="2023-07-11T00:23:40.671602997Z" level=info msg="StartContainer for \"e18d2c89493c5dc308003373f0f5f915a39b9f5fe659b2b6e2e1fe2b410761e0\""
	Jul 11 00:23:40 dockerenv-475110 containerd[779]: time="2023-07-11T00:23:40.718087559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-44jj5,Uid:336f4aeb-62da-4505-b961-4676c2b63598,Namespace:kube-system,Attempt:0,}"
	Jul 11 00:23:40 dockerenv-475110 containerd[779]: time="2023-07-11T00:23:40.726010438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mrzzf,Uid:56200528-ad82-4006-8f43-9b1c688c625e,Namespace:kube-system,Attempt:0,} returns sandbox id \"56bcfed86be49a6163c161de217f36f6d11eb2aced8c286aa4b3f444d02e68d2\""
	Jul 11 00:23:40 dockerenv-475110 containerd[779]: time="2023-07-11T00:23:40.728785061Z" level=info msg="CreateContainer within sandbox \"56bcfed86be49a6163c161de217f36f6d11eb2aced8c286aa4b3f444d02e68d2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Jul 11 00:23:40 dockerenv-475110 containerd[779]: time="2023-07-11T00:23:40.733386043Z" level=info msg="StartContainer for \"e18d2c89493c5dc308003373f0f5f915a39b9f5fe659b2b6e2e1fe2b410761e0\" returns successfully"
	Jul 11 00:23:40 dockerenv-475110 containerd[779]: time="2023-07-11T00:23:40.753685113Z" level=info msg="CreateContainer within sandbox \"56bcfed86be49a6163c161de217f36f6d11eb2aced8c286aa4b3f444d02e68d2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8dbd29ade1e603f04ffe0d5f3e0424c4f0ad9dc152c3b639b6b8dddc3922f4bd\""
	Jul 11 00:23:40 dockerenv-475110 containerd[779]: time="2023-07-11T00:23:40.754742703Z" level=info msg="StartContainer for \"8dbd29ade1e603f04ffe0d5f3e0424c4f0ad9dc152c3b639b6b8dddc3922f4bd\""
	Jul 11 00:23:40 dockerenv-475110 containerd[779]: time="2023-07-11T00:23:40.773597019Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-44jj5,Uid:336f4aeb-62da-4505-b961-4676c2b63598,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"350ba91400b3000260771919ed6fe105621f382f163e1fb4cbe0eedab39f3e06\": failed to find network info for sandbox \"350ba91400b3000260771919ed6fe105621f382f163e1fb4cbe0eedab39f3e06\""
	Jul 11 00:23:40 dockerenv-475110 containerd[779]: time="2023-07-11T00:23:40.847296660Z" level=info msg="StartContainer for \"8dbd29ade1e603f04ffe0d5f3e0424c4f0ad9dc152c3b639b6b8dddc3922f4bd\" returns successfully"
	Jul 11 00:23:40 dockerenv-475110 containerd[779]: time="2023-07-11T00:23:40.990493402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-f5bv9,Uid:23c3c053-91d3-4192-9b07-49efab15605d,Namespace:kube-system,Attempt:0,} returns sandbox id \"38aa7d854c12b4710a78f1ebfdb11e3007aad8eb91b3258e3b80ea396bd48fdb\""
	Jul 11 00:23:40 dockerenv-475110 containerd[779]: time="2023-07-11T00:23:40.994619076Z" level=info msg="CreateContainer within sandbox \"38aa7d854c12b4710a78f1ebfdb11e3007aad8eb91b3258e3b80ea396bd48fdb\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Jul 11 00:23:41 dockerenv-475110 containerd[779]: time="2023-07-11T00:23:41.006771994Z" level=info msg="CreateContainer within sandbox \"38aa7d854c12b4710a78f1ebfdb11e3007aad8eb91b3258e3b80ea396bd48fdb\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"73b86d8305afcb9c4e695ecca586817f0dcca116527a7dfe7fd5df1c86fa612c\""
	Jul 11 00:23:41 dockerenv-475110 containerd[779]: time="2023-07-11T00:23:41.007273033Z" level=info msg="StartContainer for \"73b86d8305afcb9c4e695ecca586817f0dcca116527a7dfe7fd5df1c86fa612c\""
	Jul 11 00:23:41 dockerenv-475110 containerd[779]: time="2023-07-11T00:23:41.179061616Z" level=info msg="StartContainer for \"73b86d8305afcb9c4e695ecca586817f0dcca116527a7dfe7fd5df1c86fa612c\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               dockerenv-475110
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=dockerenv-475110
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=72491f7d3796d9f0aa01d4c526b07206f092e604
	                    minikube.k8s.io/name=dockerenv-475110
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_11T00_23_28_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 11 Jul 2023 00:23:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  dockerenv-475110
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 11 Jul 2023 00:23:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 11 Jul 2023 00:23:37 +0000   Tue, 11 Jul 2023 00:23:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 11 Jul 2023 00:23:37 +0000   Tue, 11 Jul 2023 00:23:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 11 Jul 2023 00:23:37 +0000   Tue, 11 Jul 2023 00:23:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 11 Jul 2023 00:23:37 +0000   Tue, 11 Jul 2023 00:23:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    dockerenv-475110
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 d81934830cd2496bb555347ea89c2dc3
	  System UUID:                2e15d93d-0d91-4fba-804a-059e462db103
	  Boot ID:                    749e5c64-747f-4fc7-bbbb-4f890adf6a1e
	  Kernel Version:             5.15.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.21
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-44jj5                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     1s
	  kube-system                 etcd-dockerenv-475110                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         15s
	  kube-system                 kindnet-f5bv9                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      1s
	  kube-system                 kube-apiserver-dockerenv-475110             250m (3%)     0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 kube-controller-manager-dockerenv-475110    200m (2%)     0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 kube-proxy-mrzzf                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  kube-system                 kube-scheduler-dockerenv-475110             100m (1%)     0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 0s    kube-proxy       
	  Normal  Starting                 14s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14s   kubelet          Node dockerenv-475110 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14s   kubelet          Node dockerenv-475110 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14s   kubelet          Node dockerenv-475110 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             14s   kubelet          Node dockerenv-475110 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  14s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4s    kubelet          Node dockerenv-475110 status is now: NodeReady
	  Normal  RegisteredNode           2s    node-controller  Node dockerenv-475110 event: Registered Node dockerenv-475110 in Controller
	
	* 
	* ==> dmesg <==
	* [Jul11 00:17]  #2
	[  +0.001532]  #3
	[  +0.000001]  #4
	[  +0.003176] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001794] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001402] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.004127]  #5
	[  +0.000685]  #6
	[  +0.000818]  #7
	[  +0.058377] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.535784] i8042: Warning: Keylock active
	[  +0.008732] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003312] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000715] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000635] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000656] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000616] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000645] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000646] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000620] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.003743] platform eisa.0: Cannot allocate resource for EISA slot 8
	[ +11.097059] kauditd_printk_skb: 34 callbacks suppressed
	
	* 
	* ==> etcd [43137135a85c64a35888685b460266f9f47129bf16b8a78128c629d67f6c4f7c] <==
	* {"level":"info","ts":"2023-07-11T00:23:22.003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-07-11T00:23:22.003Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-07-11T00:23:22.004Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-11T00:23:22.004Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-11T00:23:22.004Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-11T00:23:22.004Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-07-11T00:23:22.004Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-07-11T00:23:22.595Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-07-11T00:23:22.595Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-07-11T00:23:22.595Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-07-11T00:23:22.595Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-07-11T00:23:22.595Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-07-11T00:23:22.595Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-07-11T00:23:22.595Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-07-11T00:23:22.596Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-11T00:23:22.597Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:dockerenv-475110 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-11T00:23:22.597Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-11T00:23:22.597Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-11T00:23:22.597Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-11T00:23:22.597Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-11T00:23:22.597Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-11T00:23:22.597Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-11T00:23:22.597Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-11T00:23:22.598Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-11T00:23:22.598Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	
	* 
	* ==> kernel <==
	*  00:23:41 up 6 min,  0 users,  load average: 1.48, 1.15, 0.52
	Linux dockerenv-475110 5.15.0-1037-gcp #45~20.04.1-Ubuntu SMP Thu Jun 22 08:31:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [73b86d8305afcb9c4e695ecca586817f0dcca116527a7dfe7fd5df1c86fa612c] <==
	* I0711 00:23:41.278764       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0711 00:23:41.278835       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0711 00:23:41.279088       1 main.go:116] setting mtu 1500 for CNI 
	I0711 00:23:41.279114       1 main.go:146] kindnetd IP family: "ipv4"
	I0711 00:23:41.279145       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0711 00:23:41.674114       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0711 00:23:41.674148       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [2a65eed88df3613e5c437b9072c6e727bc3d93fe0b73e1ff557744c530388640] <==
	* I0711 00:23:24.674597       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0711 00:23:24.675351       1 aggregator.go:152] initial CRD sync complete...
	I0711 00:23:24.675371       1 autoregister_controller.go:141] Starting autoregister controller
	I0711 00:23:24.675380       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0711 00:23:24.675389       1 cache.go:39] Caches are synced for autoregister controller
	E0711 00:23:24.678968       1 controller.go:150] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	E0711 00:23:24.679081       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0711 00:23:24.883692       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0711 00:23:25.207823       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0711 00:23:25.469997       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0711 00:23:25.473307       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0711 00:23:25.473326       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0711 00:23:25.888234       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0711 00:23:25.931579       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0711 00:23:26.012308       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0711 00:23:26.019129       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0711 00:23:26.020103       1 controller.go:624] quota admission added evaluator for: endpoints
	I0711 00:23:26.023840       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0711 00:23:26.512898       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0711 00:23:27.246858       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0711 00:23:27.259089       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0711 00:23:27.270121       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0711 00:23:40.221144       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0711 00:23:40.272592       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0711 00:23:40.272594       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [a4321e60c05a923c7c002ff6864addc80e4f5f466147b65522dd99076f3ef91b] <==
	* I0711 00:23:39.363684       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0711 00:23:39.364189       1 shared_informer.go:318] Caches are synced for endpoint
	I0711 00:23:39.365917       1 shared_informer.go:318] Caches are synced for expand
	I0711 00:23:39.370371       1 shared_informer.go:318] Caches are synced for crt configmap
	I0711 00:23:39.371541       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0711 00:23:39.376855       1 shared_informer.go:318] Caches are synced for namespace
	I0711 00:23:39.378089       1 shared_informer.go:318] Caches are synced for service account
	I0711 00:23:39.383463       1 shared_informer.go:318] Caches are synced for deployment
	I0711 00:23:39.424501       1 shared_informer.go:318] Caches are synced for taint
	I0711 00:23:39.424631       1 node_lifecycle_controller.go:1223] "Initializing eviction metric for zone" zone=""
	I0711 00:23:39.424665       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0711 00:23:39.424757       1 taint_manager.go:211] "Sending events to api server"
	I0711 00:23:39.424789       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="dockerenv-475110"
	I0711 00:23:39.424856       1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
	I0711 00:23:39.424987       1 event.go:307] "Event occurred" object="dockerenv-475110" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node dockerenv-475110 event: Registered Node dockerenv-475110 in Controller"
	I0711 00:23:39.479160       1 shared_informer.go:318] Caches are synced for daemon sets
	I0711 00:23:39.494890       1 shared_informer.go:318] Caches are synced for resource quota
	I0711 00:23:39.572777       1 shared_informer.go:318] Caches are synced for resource quota
	I0711 00:23:39.876862       1 shared_informer.go:318] Caches are synced for garbage collector
	I0711 00:23:39.876896       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0711 00:23:39.893080       1 shared_informer.go:318] Caches are synced for garbage collector
	I0711 00:23:40.228710       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5d78c9869d to 1"
	I0711 00:23:40.287204       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mrzzf"
	I0711 00:23:40.287231       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-f5bv9"
	I0711 00:23:40.375076       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-44jj5"
	
	* 
	* ==> kube-proxy [8dbd29ade1e603f04ffe0d5f3e0424c4f0ad9dc152c3b639b6b8dddc3922f4bd] <==
	* I0711 00:23:40.891943       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0711 00:23:40.892006       1 server_others.go:110] "Detected node IP" address="192.168.49.2"
	I0711 00:23:40.892029       1 server_others.go:554] "Using iptables proxy"
	I0711 00:23:40.910345       1 server_others.go:192] "Using iptables Proxier"
	I0711 00:23:40.910390       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0711 00:23:40.910401       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0711 00:23:40.910425       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0711 00:23:40.910464       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0711 00:23:40.911063       1 server.go:658] "Version info" version="v1.27.3"
	I0711 00:23:40.911083       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0711 00:23:40.911724       1 config.go:315] "Starting node config controller"
	I0711 00:23:40.911743       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0711 00:23:40.911863       1 config.go:97] "Starting endpoint slice config controller"
	I0711 00:23:40.911880       1 config.go:188] "Starting service config controller"
	I0711 00:23:40.911921       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0711 00:23:40.911932       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0711 00:23:41.012253       1 shared_informer.go:318] Caches are synced for node config
	I0711 00:23:41.012271       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0711 00:23:41.012307       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [fdce337acdf28dedee3b723d8b597e33789ea4f3fd171079bbf1319bc2e0a92c] <==
	* W0711 00:23:24.673902       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0711 00:23:24.675042       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0711 00:23:24.673800       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0711 00:23:24.675100       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0711 00:23:24.674148       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0711 00:23:24.675132       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0711 00:23:24.674215       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0711 00:23:24.675180       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0711 00:23:24.674270       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0711 00:23:24.675205       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0711 00:23:24.674367       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0711 00:23:24.675227       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0711 00:23:24.674404       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0711 00:23:24.675252       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0711 00:23:24.674528       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0711 00:23:24.675271       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0711 00:23:25.512064       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0711 00:23:25.512131       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0711 00:23:25.735576       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0711 00:23:25.735604       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0711 00:23:25.753116       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0711 00:23:25.753155       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0711 00:23:25.764716       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0711 00:23:25.764768       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0711 00:23:28.982668       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Jul 11 00:23:39 dockerenv-475110 kubelet[1504]: I0711 00:23:39.616811    1504 topology_manager.go:212] "Topology Admit Handler"
	Jul 11 00:23:39 dockerenv-475110 kubelet[1504]: I0711 00:23:39.741669    1504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v7s6\" (UniqueName: \"kubernetes.io/projected/dd762c58-1970-44db-b9ea-e3484f095468-kube-api-access-5v7s6\") pod \"storage-provisioner\" (UID: \"dd762c58-1970-44db-b9ea-e3484f095468\") " pod="kube-system/storage-provisioner"
	Jul 11 00:23:39 dockerenv-475110 kubelet[1504]: I0711 00:23:39.741740    1504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/dd762c58-1970-44db-b9ea-e3484f095468-tmp\") pod \"storage-provisioner\" (UID: \"dd762c58-1970-44db-b9ea-e3484f095468\") " pod="kube-system/storage-provisioner"
	Jul 11 00:23:39 dockerenv-475110 kubelet[1504]: E0711 00:23:39.851414    1504 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 11 00:23:39 dockerenv-475110 kubelet[1504]: E0711 00:23:39.851452    1504 projected.go:198] Error preparing data for projected volume kube-api-access-5v7s6 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jul 11 00:23:39 dockerenv-475110 kubelet[1504]: E0711 00:23:39.851531    1504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dd762c58-1970-44db-b9ea-e3484f095468-kube-api-access-5v7s6 podName:dd762c58-1970-44db-b9ea-e3484f095468 nodeName:}" failed. No retries permitted until 2023-07-11 00:23:40.351507251 +0000 UTC m=+13.129279618 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5v7s6" (UniqueName: "kubernetes.io/projected/dd762c58-1970-44db-b9ea-e3484f095468-kube-api-access-5v7s6") pod "storage-provisioner" (UID: "dd762c58-1970-44db-b9ea-e3484f095468") : configmap "kube-root-ca.crt" not found
	Jul 11 00:23:40 dockerenv-475110 kubelet[1504]: I0711 00:23:40.292525    1504 topology_manager.go:212] "Topology Admit Handler"
	Jul 11 00:23:40 dockerenv-475110 kubelet[1504]: I0711 00:23:40.293938    1504 topology_manager.go:212] "Topology Admit Handler"
	Jul 11 00:23:40 dockerenv-475110 kubelet[1504]: I0711 00:23:40.382791    1504 topology_manager.go:212] "Topology Admit Handler"
	Jul 11 00:23:40 dockerenv-475110 kubelet[1504]: I0711 00:23:40.447791    1504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/56200528-ad82-4006-8f43-9b1c688c625e-kube-proxy\") pod \"kube-proxy-mrzzf\" (UID: \"56200528-ad82-4006-8f43-9b1c688c625e\") " pod="kube-system/kube-proxy-mrzzf"
	Jul 11 00:23:40 dockerenv-475110 kubelet[1504]: I0711 00:23:40.447849    1504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/23c3c053-91d3-4192-9b07-49efab15605d-cni-cfg\") pod \"kindnet-f5bv9\" (UID: \"23c3c053-91d3-4192-9b07-49efab15605d\") " pod="kube-system/kindnet-f5bv9"
	Jul 11 00:23:40 dockerenv-475110 kubelet[1504]: I0711 00:23:40.447886    1504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56200528-ad82-4006-8f43-9b1c688c625e-lib-modules\") pod \"kube-proxy-mrzzf\" (UID: \"56200528-ad82-4006-8f43-9b1c688c625e\") " pod="kube-system/kube-proxy-mrzzf"
	Jul 11 00:23:40 dockerenv-475110 kubelet[1504]: I0711 00:23:40.447918    1504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56200528-ad82-4006-8f43-9b1c688c625e-xtables-lock\") pod \"kube-proxy-mrzzf\" (UID: \"56200528-ad82-4006-8f43-9b1c688c625e\") " pod="kube-system/kube-proxy-mrzzf"
	Jul 11 00:23:40 dockerenv-475110 kubelet[1504]: I0711 00:23:40.447950    1504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bwfq\" (UniqueName: \"kubernetes.io/projected/56200528-ad82-4006-8f43-9b1c688c625e-kube-api-access-4bwfq\") pod \"kube-proxy-mrzzf\" (UID: \"56200528-ad82-4006-8f43-9b1c688c625e\") " pod="kube-system/kube-proxy-mrzzf"
	Jul 11 00:23:40 dockerenv-475110 kubelet[1504]: I0711 00:23:40.448018    1504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23c3c053-91d3-4192-9b07-49efab15605d-xtables-lock\") pod \"kindnet-f5bv9\" (UID: \"23c3c053-91d3-4192-9b07-49efab15605d\") " pod="kube-system/kindnet-f5bv9"
	Jul 11 00:23:40 dockerenv-475110 kubelet[1504]: I0711 00:23:40.448058    1504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23c3c053-91d3-4192-9b07-49efab15605d-lib-modules\") pod \"kindnet-f5bv9\" (UID: \"23c3c053-91d3-4192-9b07-49efab15605d\") " pod="kube-system/kindnet-f5bv9"
	Jul 11 00:23:40 dockerenv-475110 kubelet[1504]: I0711 00:23:40.448096    1504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zth4\" (UniqueName: \"kubernetes.io/projected/23c3c053-91d3-4192-9b07-49efab15605d-kube-api-access-4zth4\") pod \"kindnet-f5bv9\" (UID: \"23c3c053-91d3-4192-9b07-49efab15605d\") " pod="kube-system/kindnet-f5bv9"
	Jul 11 00:23:40 dockerenv-475110 kubelet[1504]: I0711 00:23:40.549136    1504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5klts\" (UniqueName: \"kubernetes.io/projected/336f4aeb-62da-4505-b961-4676c2b63598-kube-api-access-5klts\") pod \"coredns-5d78c9869d-44jj5\" (UID: \"336f4aeb-62da-4505-b961-4676c2b63598\") " pod="kube-system/coredns-5d78c9869d-44jj5"
	Jul 11 00:23:40 dockerenv-475110 kubelet[1504]: I0711 00:23:40.549281    1504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/336f4aeb-62da-4505-b961-4676c2b63598-config-volume\") pod \"coredns-5d78c9869d-44jj5\" (UID: \"336f4aeb-62da-4505-b961-4676c2b63598\") " pod="kube-system/coredns-5d78c9869d-44jj5"
	Jul 11 00:23:40 dockerenv-475110 kubelet[1504]: E0711 00:23:40.774206    1504 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"350ba91400b3000260771919ed6fe105621f382f163e1fb4cbe0eedab39f3e06\": failed to find network info for sandbox \"350ba91400b3000260771919ed6fe105621f382f163e1fb4cbe0eedab39f3e06\""
	Jul 11 00:23:40 dockerenv-475110 kubelet[1504]: E0711 00:23:40.774354    1504 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"350ba91400b3000260771919ed6fe105621f382f163e1fb4cbe0eedab39f3e06\": failed to find network info for sandbox \"350ba91400b3000260771919ed6fe105621f382f163e1fb4cbe0eedab39f3e06\"" pod="kube-system/coredns-5d78c9869d-44jj5"
	Jul 11 00:23:40 dockerenv-475110 kubelet[1504]: E0711 00:23:40.774418    1504 kuberuntime_manager.go:1122] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"350ba91400b3000260771919ed6fe105621f382f163e1fb4cbe0eedab39f3e06\": failed to find network info for sandbox \"350ba91400b3000260771919ed6fe105621f382f163e1fb4cbe0eedab39f3e06\"" pod="kube-system/coredns-5d78c9869d-44jj5"
	Jul 11 00:23:40 dockerenv-475110 kubelet[1504]: E0711 00:23:40.774711    1504 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5d78c9869d-44jj5_kube-system(336f4aeb-62da-4505-b961-4676c2b63598)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5d78c9869d-44jj5_kube-system(336f4aeb-62da-4505-b961-4676c2b63598)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"350ba91400b3000260771919ed6fe105621f382f163e1fb4cbe0eedab39f3e06\\\": failed to find network info for sandbox \\\"350ba91400b3000260771919ed6fe105621f382f163e1fb4cbe0eedab39f3e06\\\"\"" pod="kube-system/coredns-5d78c9869d-44jj5" podUID=336f4aeb-62da-4505-b961-4676c2b63598
	Jul 11 00:23:41 dockerenv-475110 kubelet[1504]: I0711 00:23:41.441697    1504 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-mrzzf" podStartSLOduration=1.4416614220000001 podCreationTimestamp="2023-07-11 00:23:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-11 00:23:41.433196853 +0000 UTC m=+14.210969230" watchObservedRunningTime="2023-07-11 00:23:41.441661422 +0000 UTC m=+14.219433853"
	Jul 11 00:23:41 dockerenv-475110 kubelet[1504]: I0711 00:23:41.452346    1504 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.452299531 podCreationTimestamp="2023-07-11 00:23:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-11 00:23:41.441640082 +0000 UTC m=+14.219412457" watchObservedRunningTime="2023-07-11 00:23:41.452299531 +0000 UTC m=+14.230071906"
	
	* 
	* ==> storage-provisioner [e18d2c89493c5dc308003373f0f5f915a39b9f5fe659b2b6e2e1fe2b410761e0] <==
	* I0711 00:23:40.742551       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p dockerenv-475110 -n dockerenv-475110
helpers_test.go:261: (dbg) Run:  kubectl --context dockerenv-475110 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: coredns-5d78c9869d-44jj5
helpers_test.go:274: ======> post-mortem[TestDockerEnvContainerd]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context dockerenv-475110 describe pod coredns-5d78c9869d-44jj5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context dockerenv-475110 describe pod coredns-5d78c9869d-44jj5: exit status 1 (61.700637ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-5d78c9869d-44jj5" not found

** /stderr **
helpers_test.go:279: kubectl --context dockerenv-475110 describe pod coredns-5d78c9869d-44jj5: exit status 1
helpers_test.go:175: Cleaning up "dockerenv-475110" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-475110
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-475110: (1.874493035s)
--- FAIL: TestDockerEnvContainerd (36.25s)
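The post-mortem above finds non-running pods with a `--field-selector=status.phase!=Running` query against the live cluster. The same phase filter can be sketched offline; the sample pod list and the `/tmp/pods.json` path below are hypothetical stand-ins, while the real test queries the API server via kubectl:

```shell
# Offline sketch of the non-running-pod check from helpers_test.go above.
# Sample data and path are hypothetical; the real check is:
#   kubectl get po -A --field-selector=status.phase!=Running
cat > /tmp/pods.json <<'EOF'
{"items": [
  {"metadata": {"name": "coredns-5d78c9869d-44jj5"}, "status": {"phase": "Pending"}},
  {"metadata": {"name": "kube-proxy-mrzzf"}, "status": {"phase": "Running"}}
]}
EOF
python3 - <<'PY'
import json
pods = json.load(open("/tmp/pods.json"))["items"]
# Keep only pods whose phase is not Running, mirroring the field selector.
print(" ".join(p["metadata"]["name"] for p in pods
               if p["status"]["phase"] != "Running"))
PY
```

With the sample data, only `coredns-5d78c9869d-44jj5` is printed, matching the "non-running pods" line in the report.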

TestMissingContainerUpgrade (141.82s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:321: (dbg) Run:  /tmp/minikube-v1.22.0.93712327.exe start -p missing-upgrade-576591 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:321: (dbg) Done: /tmp/minikube-v1.22.0.93712327.exe start -p missing-upgrade-576591 --memory=2200 --driver=docker  --container-runtime=containerd: (41.976638541s)
version_upgrade_test.go:330: (dbg) Run:  docker stop missing-upgrade-576591
version_upgrade_test.go:330: (dbg) Done: docker stop missing-upgrade-576591: (10.458839072s)
version_upgrade_test.go:335: (dbg) Run:  docker rm missing-upgrade-576591
version_upgrade_test.go:341: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-576591 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0711 00:47:52.878082   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/functional-403028/client.crt: no such file or directory
version_upgrade_test.go:341: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p missing-upgrade-576591 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 90 (1m25.193641407s)

-- stdout --
	* [missing-upgrade-576591] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15452
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15452-3381/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15452-3381/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-576591 in cluster missing-upgrade-576591
	* Pulling base image ...
	* Another minikube instance is downloading dependencies... 
	* docker "missing-upgrade-576591" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0711 00:47:50.111118  164676 out.go:296] Setting OutFile to fd 1 ...
	I0711 00:47:50.111276  164676 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0711 00:47:50.111324  164676 out.go:309] Setting ErrFile to fd 2...
	I0711 00:47:50.111362  164676 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0711 00:47:50.111562  164676 root.go:337] Updating PATH: /home/jenkins/minikube-integration/15452-3381/.minikube/bin
	I0711 00:47:50.112491  164676 out.go:303] Setting JSON to false
	I0711 00:47:50.115652  164676 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1822,"bootTime":1689034648,"procs":746,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0711 00:47:50.115809  164676 start.go:137] virtualization: kvm guest
	I0711 00:47:50.120084  164676 out.go:177] * [missing-upgrade-576591] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0711 00:47:50.122854  164676 out.go:177]   - MINIKUBE_LOCATION=15452
	I0711 00:47:50.122792  164676 notify.go:220] Checking for updates...
	I0711 00:47:50.149817  164676 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0711 00:47:50.170072  164676 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15452-3381/kubeconfig
	I0711 00:47:50.191567  164676 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15452-3381/.minikube
	I0711 00:47:50.223284  164676 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0711 00:47:50.293946  164676 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0711 00:47:50.304373  164676 config.go:182] Loaded profile config "missing-upgrade-576591": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0711 00:47:50.322469  164676 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0711 00:47:50.324481  164676 driver.go:373] Setting default libvirt URI to qemu:///system
	I0711 00:47:50.365289  164676 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0711 00:47:50.365397  164676 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0711 00:47:50.497462  164676 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:66 SystemTime:2023-07-11 00:47:50.485078538 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0711 00:47:50.497605  164676 docker.go:294] overlay module found
	I0711 00:47:50.568952  164676 out.go:177] * Using the docker driver based on existing profile
	I0711 00:47:50.589851  164676 start.go:297] selected driver: docker
	I0711 00:47:50.589885  164676 start.go:944] validating driver "docker" against &{Name:missing-upgrade-576591 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:missing-upgrade-576591 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.21.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0711 00:47:50.590036  164676 start.go:955] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0711 00:47:50.620157  164676 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0711 00:47:50.703593  164676 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:66 SystemTime:2023-07-11 00:47:50.689438006 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0711 00:47:50.703979  164676 cni.go:84] Creating CNI manager for ""
	I0711 00:47:50.704006  164676 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0711 00:47:50.704023  164676 start_flags.go:319] config:
	{Name:missing-upgrade-576591 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:missing-upgrade-576591 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.21.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0711 00:47:50.831897  164676 out.go:177] * Starting control plane node missing-upgrade-576591 in cluster missing-upgrade-576591
	I0711 00:47:50.874588  164676 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0711 00:47:50.912746  164676 out.go:177] * Pulling base image ...
	I0711 00:47:50.916460  164676 preload.go:132] Checking if preload exists for k8s version v1.21.2 and runtime containerd
	I0711 00:47:50.916587  164676 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0711 00:47:50.937826  164676 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.21.2/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-amd64.tar.lz4
	I0711 00:47:50.937853  164676 cache.go:57] Caching tarball of preloaded images
	I0711 00:47:50.948094  164676 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0711 00:47:50.948129  164676 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0711 00:47:51.134601  164676 out.go:204] * Another minikube instance is downloading dependencies... 
	I0711 00:47:53.954673  164676 preload.go:174] Found /home/jenkins/minikube-integration/15452-3381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0711 00:47:53.954709  164676 cache.go:60] Finished verifying existence of preloaded tar for  v1.21.2 on containerd
	I0711 00:47:53.954969  164676 profile.go:148] Saving config to /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/missing-upgrade-576591/config.json ...
	I0711 00:47:53.955272  164676 cache.go:195] Successfully downloaded all kic artifacts
	I0711 00:47:53.955309  164676 start.go:365] acquiring machines lock for missing-upgrade-576591: {Name:mk1b857bfe4365a626c5f998df2b42c763a4d96a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0711 00:47:53.955378  164676 start.go:369] acquired machines lock for "missing-upgrade-576591" in 43.959µs
	I0711 00:47:53.955393  164676 start.go:96] Skipping create...Using existing machine configuration
	I0711 00:47:53.955400  164676 fix.go:54] fixHost starting: 
	I0711 00:47:53.955769  164676 cli_runner.go:164] Run: docker container inspect missing-upgrade-576591 --format={{.State.Status}}
	W0711 00:47:53.981179  164676 cli_runner.go:211] docker container inspect missing-upgrade-576591 --format={{.State.Status}} returned with exit code 1
	I0711 00:47:53.981264  164676 fix.go:102] recreateIfNeeded on missing-upgrade-576591: state= err=unknown state "missing-upgrade-576591": docker container inspect missing-upgrade-576591 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-576591
	I0711 00:47:53.981302  164676 fix.go:107] machineExists: false. err=machine does not exist
	I0711 00:47:53.982769  164676 out.go:177] * docker "missing-upgrade-576591" container is missing, will recreate.
	I0711 00:47:53.984029  164676 delete.go:124] DEMOLISHING missing-upgrade-576591 ...
	I0711 00:47:53.984079  164676 cli_runner.go:164] Run: docker container inspect missing-upgrade-576591 --format={{.State.Status}}
	W0711 00:47:54.006736  164676 cli_runner.go:211] docker container inspect missing-upgrade-576591 --format={{.State.Status}} returned with exit code 1
	W0711 00:47:54.006811  164676 stop.go:75] unable to get state: unknown state "missing-upgrade-576591": docker container inspect missing-upgrade-576591 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-576591
	I0711 00:47:54.006836  164676 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-576591": docker container inspect missing-upgrade-576591 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-576591
	I0711 00:47:54.007371  164676 cli_runner.go:164] Run: docker container inspect missing-upgrade-576591 --format={{.State.Status}}
	W0711 00:47:54.045639  164676 cli_runner.go:211] docker container inspect missing-upgrade-576591 --format={{.State.Status}} returned with exit code 1
	I0711 00:47:54.045721  164676 delete.go:82] Unable to get host status for missing-upgrade-576591, assuming it has already been deleted: state: unknown state "missing-upgrade-576591": docker container inspect missing-upgrade-576591 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-576591
	I0711 00:47:54.045796  164676 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-576591
	W0711 00:47:54.074065  164676 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-576591 returned with exit code 1
	I0711 00:47:54.074103  164676 kic.go:367] could not find the container missing-upgrade-576591 to remove it. will try anyways
	I0711 00:47:54.074150  164676 cli_runner.go:164] Run: docker container inspect missing-upgrade-576591 --format={{.State.Status}}
	W0711 00:47:54.097650  164676 cli_runner.go:211] docker container inspect missing-upgrade-576591 --format={{.State.Status}} returned with exit code 1
	W0711 00:47:54.097734  164676 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-576591": docker container inspect missing-upgrade-576591 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-576591
	I0711 00:47:54.097810  164676 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-576591 /bin/bash -c "sudo init 0"
	W0711 00:47:54.122922  164676 cli_runner.go:211] docker exec --privileged -t missing-upgrade-576591 /bin/bash -c "sudo init 0" returned with exit code 1
	I0711 00:47:54.122994  164676 oci.go:647] error shutdown missing-upgrade-576591: docker exec --privileged -t missing-upgrade-576591 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-576591
	I0711 00:47:55.123548  164676 cli_runner.go:164] Run: docker container inspect missing-upgrade-576591 --format={{.State.Status}}
	W0711 00:47:55.140783  164676 cli_runner.go:211] docker container inspect missing-upgrade-576591 --format={{.State.Status}} returned with exit code 1
	I0711 00:47:55.140855  164676 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-576591": docker container inspect missing-upgrade-576591 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-576591
	I0711 00:47:55.140875  164676 oci.go:661] temporary error: container missing-upgrade-576591 status is  but expect it to be exited
	I0711 00:47:55.140921  164676 retry.go:31] will retry after 538.863319ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-576591": docker container inspect missing-upgrade-576591 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-576591
	I0711 00:47:55.680440  164676 cli_runner.go:164] Run: docker container inspect missing-upgrade-576591 --format={{.State.Status}}
	W0711 00:47:55.707180  164676 cli_runner.go:211] docker container inspect missing-upgrade-576591 --format={{.State.Status}} returned with exit code 1
	I0711 00:47:55.707249  164676 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-576591": docker container inspect missing-upgrade-576591 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-576591
	I0711 00:47:55.707278  164676 oci.go:661] temporary error: container missing-upgrade-576591 status is  but expect it to be exited
	I0711 00:47:55.707311  164676 retry.go:31] will retry after 442.357004ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-576591": docker container inspect missing-upgrade-576591 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-576591
	I0711 00:47:56.149948  164676 cli_runner.go:164] Run: docker container inspect missing-upgrade-576591 --format={{.State.Status}}
	W0711 00:47:56.166919  164676 cli_runner.go:211] docker container inspect missing-upgrade-576591 --format={{.State.Status}} returned with exit code 1
	I0711 00:47:56.167005  164676 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-576591": docker container inspect missing-upgrade-576591 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-576591
	I0711 00:47:56.167020  164676 oci.go:661] temporary error: container missing-upgrade-576591 status is  but expect it to be exited
	I0711 00:47:56.167051  164676 retry.go:31] will retry after 1.212620147s: couldn't verify container is exited. %v: unknown state "missing-upgrade-576591": docker container inspect missing-upgrade-576591 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-576591
	I0711 00:47:57.380563  164676 cli_runner.go:164] Run: docker container inspect missing-upgrade-576591 --format={{.State.Status}}
	W0711 00:47:57.405055  164676 cli_runner.go:211] docker container inspect missing-upgrade-576591 --format={{.State.Status}} returned with exit code 1
	I0711 00:47:57.405126  164676 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-576591": docker container inspect missing-upgrade-576591 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-576591
	I0711 00:47:57.405152  164676 oci.go:661] temporary error: container missing-upgrade-576591 status is  but expect it to be exited
	I0711 00:47:57.405191  164676 retry.go:31] will retry after 1.032828774s: couldn't verify container is exited. %v: unknown state "missing-upgrade-576591": docker container inspect missing-upgrade-576591 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-576591
	I0711 00:47:58.439173  164676 cli_runner.go:164] Run: docker container inspect missing-upgrade-576591 --format={{.State.Status}}
	W0711 00:47:58.460197  164676 cli_runner.go:211] docker container inspect missing-upgrade-576591 --format={{.State.Status}} returned with exit code 1
	I0711 00:47:58.460267  164676 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-576591": docker container inspect missing-upgrade-576591 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-576591
	I0711 00:47:58.460286  164676 oci.go:661] temporary error: container missing-upgrade-576591 status is  but expect it to be exited
	I0711 00:47:58.460317  164676 retry.go:31] will retry after 2.779594857s: couldn't verify container is exited. %v: unknown state "missing-upgrade-576591": docker container inspect missing-upgrade-576591 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-576591
	I0711 00:48:01.240529  164676 cli_runner.go:164] Run: docker container inspect missing-upgrade-576591 --format={{.State.Status}}
	W0711 00:48:01.261072  164676 cli_runner.go:211] docker container inspect missing-upgrade-576591 --format={{.State.Status}} returned with exit code 1
	I0711 00:48:01.261144  164676 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-576591": docker container inspect missing-upgrade-576591 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-576591
	I0711 00:48:01.261158  164676 oci.go:661] temporary error: container missing-upgrade-576591 status is  but expect it to be exited
	I0711 00:48:01.261184  164676 retry.go:31] will retry after 2.54420255s: couldn't verify container is exited. %v: unknown state "missing-upgrade-576591": docker container inspect missing-upgrade-576591 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-576591
	I0711 00:48:03.806085  164676 cli_runner.go:164] Run: docker container inspect missing-upgrade-576591 --format={{.State.Status}}
	W0711 00:48:03.822649  164676 cli_runner.go:211] docker container inspect missing-upgrade-576591 --format={{.State.Status}} returned with exit code 1
	I0711 00:48:03.822701  164676 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-576591": docker container inspect missing-upgrade-576591 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-576591
	I0711 00:48:03.822709  164676 oci.go:661] temporary error: container missing-upgrade-576591 status is  but expect it to be exited
	I0711 00:48:03.822733  164676 retry.go:31] will retry after 8.412449124s: couldn't verify container is exited. %v: unknown state "missing-upgrade-576591": docker container inspect missing-upgrade-576591 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-576591
	I0711 00:48:12.235373  164676 cli_runner.go:164] Run: docker container inspect missing-upgrade-576591 --format={{.State.Status}}
	W0711 00:48:12.250709  164676 cli_runner.go:211] docker container inspect missing-upgrade-576591 --format={{.State.Status}} returned with exit code 1
	I0711 00:48:12.250785  164676 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-576591": docker container inspect missing-upgrade-576591 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-576591
	I0711 00:48:12.250800  164676 oci.go:661] temporary error: container missing-upgrade-576591 status is  but expect it to be exited
	I0711 00:48:12.250836  164676 oci.go:88] couldn't shut down missing-upgrade-576591 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-576591": docker container inspect missing-upgrade-576591 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-576591
	 
	I0711 00:48:12.250884  164676 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-576591
	I0711 00:48:12.269724  164676 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-576591
	W0711 00:48:12.288652  164676 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-576591 returned with exit code 1
	I0711 00:48:12.288738  164676 cli_runner.go:164] Run: docker network inspect missing-upgrade-576591 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0711 00:48:12.306947  164676 cli_runner.go:164] Run: docker network rm missing-upgrade-576591
	I0711 00:48:12.448014  164676 fix.go:114] Sleeping 1 second for extra luck!
	I0711 00:48:13.448727  164676 start.go:125] createHost starting for "" (driver="docker")
	I0711 00:48:13.450534  164676 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0711 00:48:13.450685  164676 start.go:159] libmachine.API.Create for "missing-upgrade-576591" (driver="docker")
	I0711 00:48:13.450732  164676 client.go:168] LocalClient.Create starting
	I0711 00:48:13.450810  164676 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca.pem
	I0711 00:48:13.450848  164676 main.go:141] libmachine: Decoding PEM data...
	I0711 00:48:13.450873  164676 main.go:141] libmachine: Parsing certificate...
	I0711 00:48:13.450960  164676 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15452-3381/.minikube/certs/cert.pem
	I0711 00:48:13.450990  164676 main.go:141] libmachine: Decoding PEM data...
	I0711 00:48:13.451004  164676 main.go:141] libmachine: Parsing certificate...
	I0711 00:48:13.451289  164676 cli_runner.go:164] Run: docker network inspect missing-upgrade-576591 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0711 00:48:13.469209  164676 cli_runner.go:211] docker network inspect missing-upgrade-576591 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0711 00:48:13.469289  164676 network_create.go:281] running [docker network inspect missing-upgrade-576591] to gather additional debugging logs...
	I0711 00:48:13.469317  164676 cli_runner.go:164] Run: docker network inspect missing-upgrade-576591
	W0711 00:48:13.489664  164676 cli_runner.go:211] docker network inspect missing-upgrade-576591 returned with exit code 1
	I0711 00:48:13.489715  164676 network_create.go:284] error running [docker network inspect missing-upgrade-576591]: docker network inspect missing-upgrade-576591: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-576591 not found
	I0711 00:48:13.489740  164676 network_create.go:286] output of [docker network inspect missing-upgrade-576591]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-576591 not found
	
	** /stderr **
	I0711 00:48:13.489830  164676 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0711 00:48:13.509583  164676 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6f3ac6422f7d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:4d:e8:03:fa} reservation:<nil>}
	I0711 00:48:13.510263  164676 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-622643b4f5b5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:01:af:cb:50} reservation:<nil>}
	I0711 00:48:13.511073  164676 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000fb48e0}
	I0711 00:48:13.511096  164676 network_create.go:123] attempt to create docker network missing-upgrade-576591 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0711 00:48:13.511145  164676 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-576591 missing-upgrade-576591
	I0711 00:48:13.580375  164676 network_create.go:107] docker network missing-upgrade-576591 192.168.67.0/24 created
	I0711 00:48:13.580410  164676 kic.go:117] calculated static IP "192.168.67.2" for the "missing-upgrade-576591" container
	I0711 00:48:13.580495  164676 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0711 00:48:13.607129  164676 cli_runner.go:164] Run: docker volume create missing-upgrade-576591 --label name.minikube.sigs.k8s.io=missing-upgrade-576591 --label created_by.minikube.sigs.k8s.io=true
	I0711 00:48:13.625311  164676 oci.go:103] Successfully created a docker volume missing-upgrade-576591
	I0711 00:48:13.625431  164676 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-576591-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-576591 --entrypoint /usr/bin/test -v missing-upgrade-576591:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0711 00:48:20.337640  164676 cli_runner.go:217] Completed: docker run --rm --name missing-upgrade-576591-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-576591 --entrypoint /usr/bin/test -v missing-upgrade-576591:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib: (6.712143212s)
	I0711 00:48:20.337674  164676 oci.go:107] Successfully prepared a docker volume missing-upgrade-576591
	I0711 00:48:20.337714  164676 preload.go:132] Checking if preload exists for k8s version v1.21.2 and runtime containerd
	I0711 00:48:20.337742  164676 kic.go:190] Starting extracting preloaded images to volume ...
	I0711 00:48:20.337804  164676 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15452-3381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-576591:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0711 00:48:24.508074  164676 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15452-3381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-576591:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (4.170202868s)
	I0711 00:48:24.508123  164676 kic.go:199] duration metric: took 4.170372 seconds to extract preloaded images to volume
	W0711 00:48:24.508408  164676 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0711 00:48:24.508713  164676 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0711 00:48:24.566639  164676 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-576591 --name missing-upgrade-576591 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-576591 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-576591 --network missing-upgrade-576591 --ip 192.168.67.2 --volume missing-upgrade-576591:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0711 00:48:24.980301  164676 cli_runner.go:164] Run: docker container inspect missing-upgrade-576591 --format={{.State.Running}}
	I0711 00:48:25.012297  164676 cli_runner.go:164] Run: docker container inspect missing-upgrade-576591 --format={{.State.Status}}
	I0711 00:48:25.039495  164676 cli_runner.go:164] Run: docker exec missing-upgrade-576591 stat /var/lib/dpkg/alternatives/iptables
	I0711 00:48:25.104152  164676 oci.go:144] the created container "missing-upgrade-576591" has a running status.
	I0711 00:48:25.104187  164676 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/15452-3381/.minikube/machines/missing-upgrade-576591/id_rsa...
	I0711 00:48:25.360441  164676 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15452-3381/.minikube/machines/missing-upgrade-576591/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0711 00:48:25.387718  164676 cli_runner.go:164] Run: docker container inspect missing-upgrade-576591 --format={{.State.Status}}
	I0711 00:48:25.422193  164676 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0711 00:48:25.422219  164676 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-576591 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0711 00:48:25.506605  164676 cli_runner.go:164] Run: docker container inspect missing-upgrade-576591 --format={{.State.Status}}
	I0711 00:48:25.545891  164676 machine.go:88] provisioning docker machine ...
	I0711 00:48:25.545927  164676 ubuntu.go:169] provisioning hostname "missing-upgrade-576591"
	I0711 00:48:25.546029  164676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-576591
	I0711 00:48:25.563913  164676 main.go:141] libmachine: Using SSH client type: native
	I0711 00:48:25.564662  164676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32977 <nil> <nil>}
	I0711 00:48:25.564686  164676 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-576591 && echo "missing-upgrade-576591" | sudo tee /etc/hostname
	I0711 00:48:25.696609  164676 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-576591
	
	I0711 00:48:25.696688  164676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-576591
	I0711 00:48:25.723225  164676 main.go:141] libmachine: Using SSH client type: native
	I0711 00:48:25.723905  164676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32977 <nil> <nil>}
	I0711 00:48:25.723935  164676 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-576591' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-576591/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-576591' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0711 00:48:25.849595  164676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0711 00:48:25.849633  164676 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15452-3381/.minikube CaCertPath:/home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15452-3381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15452-3381/.minikube}
	I0711 00:48:25.849674  164676 ubuntu.go:177] setting up certificates
	I0711 00:48:25.849684  164676 provision.go:83] configureAuth start
	I0711 00:48:25.849742  164676 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-576591
	I0711 00:48:25.868545  164676 provision.go:138] copyHostCerts
	I0711 00:48:25.868611  164676 exec_runner.go:144] found /home/jenkins/minikube-integration/15452-3381/.minikube/ca.pem, removing ...
	I0711 00:48:25.868621  164676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/15452-3381/.minikube/ca.pem
	I0711 00:48:25.868690  164676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15452-3381/.minikube/ca.pem (1078 bytes)
	I0711 00:48:25.868802  164676 exec_runner.go:144] found /home/jenkins/minikube-integration/15452-3381/.minikube/cert.pem, removing ...
	I0711 00:48:25.868815  164676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/15452-3381/.minikube/cert.pem
	I0711 00:48:25.868855  164676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15452-3381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15452-3381/.minikube/cert.pem (1123 bytes)
	I0711 00:48:25.868993  164676 exec_runner.go:144] found /home/jenkins/minikube-integration/15452-3381/.minikube/key.pem, removing ...
	I0711 00:48:25.869008  164676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/15452-3381/.minikube/key.pem
	I0711 00:48:25.869045  164676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15452-3381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15452-3381/.minikube/key.pem (1679 bytes)
	I0711 00:48:25.869109  164676 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15452-3381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-576591 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-576591]
	I0711 00:48:26.010344  164676 provision.go:172] copyRemoteCerts
	I0711 00:48:26.010431  164676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0711 00:48:26.010471  164676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-576591
	I0711 00:48:26.028561  164676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32977 SSHKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/missing-upgrade-576591/id_rsa Username:docker}
	I0711 00:48:26.127329  164676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0711 00:48:26.159954  164676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0711 00:48:26.185663  164676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0711 00:48:26.210789  164676 provision.go:86] duration metric: configureAuth took 361.090058ms
	I0711 00:48:26.210824  164676 ubuntu.go:193] setting minikube options for container-runtime
	I0711 00:48:26.211054  164676 config.go:182] Loaded profile config "missing-upgrade-576591": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0711 00:48:26.211075  164676 machine.go:91] provisioned docker machine in 665.161884ms
	I0711 00:48:26.211083  164676 client.go:171] LocalClient.Create took 12.760342163s
	I0711 00:48:26.211102  164676 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-576591" took 12.760417225s
	I0711 00:48:26.211117  164676 start.go:300] post-start starting for "missing-upgrade-576591" (driver="docker")
	I0711 00:48:26.211134  164676 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0711 00:48:26.211192  164676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0711 00:48:26.211237  164676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-576591
	I0711 00:48:26.235763  164676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32977 SSHKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/missing-upgrade-576591/id_rsa Username:docker}
	I0711 00:48:26.331137  164676 ssh_runner.go:195] Run: cat /etc/os-release
	I0711 00:48:26.333828  164676 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0711 00:48:26.333847  164676 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0711 00:48:26.333858  164676 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0711 00:48:26.333864  164676 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0711 00:48:26.333872  164676 filesync.go:126] Scanning /home/jenkins/minikube-integration/15452-3381/.minikube/addons for local assets ...
	I0711 00:48:26.333909  164676 filesync.go:126] Scanning /home/jenkins/minikube-integration/15452-3381/.minikube/files for local assets ...
	I0711 00:48:26.333997  164676 filesync.go:149] local asset: /home/jenkins/minikube-integration/15452-3381/.minikube/files/etc/ssl/certs/101682.pem -> 101682.pem in /etc/ssl/certs
	I0711 00:48:26.334093  164676 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0711 00:48:26.340629  164676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/files/etc/ssl/certs/101682.pem --> /etc/ssl/certs/101682.pem (1708 bytes)
	I0711 00:48:26.359266  164676 start.go:303] post-start completed in 148.116205ms
	I0711 00:48:26.359704  164676 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-576591
	I0711 00:48:26.387789  164676 profile.go:148] Saving config to /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/missing-upgrade-576591/config.json ...
	I0711 00:48:26.388092  164676 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0711 00:48:26.388154  164676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-576591
	I0711 00:48:26.415543  164676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32977 SSHKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/missing-upgrade-576591/id_rsa Username:docker}
	I0711 00:48:26.506334  164676 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0711 00:48:26.514434  164676 start.go:128] duration metric: createHost completed in 13.065659744s
	I0711 00:48:26.514551  164676 cli_runner.go:164] Run: docker container inspect missing-upgrade-576591 --format={{.State.Status}}
	W0711 00:48:26.533634  164676 fix.go:128] unexpected machine state, will restart: <nil>
	I0711 00:48:26.533654  164676 machine.go:88] provisioning docker machine ...
	I0711 00:48:26.533670  164676 ubuntu.go:169] provisioning hostname "missing-upgrade-576591"
	I0711 00:48:26.533716  164676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-576591
	I0711 00:48:26.557480  164676 main.go:141] libmachine: Using SSH client type: native
	I0711 00:48:26.558209  164676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32977 <nil> <nil>}
	I0711 00:48:26.558237  164676 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-576591 && echo "missing-upgrade-576591" | sudo tee /etc/hostname
	I0711 00:48:26.698198  164676 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-576591
	
	I0711 00:48:26.698282  164676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-576591
	I0711 00:48:26.718761  164676 main.go:141] libmachine: Using SSH client type: native
	I0711 00:48:26.719434  164676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32977 <nil> <nil>}
	I0711 00:48:26.719471  164676 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-576591' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-576591/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-576591' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0711 00:48:26.841882  164676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0711 00:48:26.841911  164676 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15452-3381/.minikube CaCertPath:/home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15452-3381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15452-3381/.minikube}
	I0711 00:48:26.841938  164676 ubuntu.go:177] setting up certificates
	I0711 00:48:26.841949  164676 provision.go:83] configureAuth start
	I0711 00:48:26.842036  164676 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-576591
	I0711 00:48:26.864734  164676 provision.go:138] copyHostCerts
	I0711 00:48:26.864853  164676 exec_runner.go:144] found /home/jenkins/minikube-integration/15452-3381/.minikube/ca.pem, removing ...
	I0711 00:48:26.864869  164676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/15452-3381/.minikube/ca.pem
	I0711 00:48:26.864944  164676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15452-3381/.minikube/ca.pem (1078 bytes)
	I0711 00:48:26.865075  164676 exec_runner.go:144] found /home/jenkins/minikube-integration/15452-3381/.minikube/cert.pem, removing ...
	I0711 00:48:26.865088  164676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/15452-3381/.minikube/cert.pem
	I0711 00:48:26.865132  164676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15452-3381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15452-3381/.minikube/cert.pem (1123 bytes)
	I0711 00:48:26.865237  164676 exec_runner.go:144] found /home/jenkins/minikube-integration/15452-3381/.minikube/key.pem, removing ...
	I0711 00:48:26.865248  164676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/15452-3381/.minikube/key.pem
	I0711 00:48:26.865287  164676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15452-3381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15452-3381/.minikube/key.pem (1679 bytes)
	I0711 00:48:26.865362  164676 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15452-3381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-576591 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-576591]
	I0711 00:48:26.997292  164676 provision.go:172] copyRemoteCerts
	I0711 00:48:26.997343  164676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0711 00:48:26.997384  164676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-576591
	I0711 00:48:27.023712  164676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32977 SSHKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/missing-upgrade-576591/id_rsa Username:docker}
	I0711 00:48:27.119633  164676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0711 00:48:27.147481  164676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0711 00:48:27.174971  164676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0711 00:48:27.203720  164676 provision.go:86] duration metric: configureAuth took 361.751507ms
	I0711 00:48:27.203756  164676 ubuntu.go:193] setting minikube options for container-runtime
	I0711 00:48:27.203987  164676 config.go:182] Loaded profile config "missing-upgrade-576591": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0711 00:48:27.204009  164676 machine.go:91] provisioned docker machine in 670.34843ms
	I0711 00:48:27.204018  164676 start.go:300] post-start starting for "missing-upgrade-576591" (driver="docker")
	I0711 00:48:27.204030  164676 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0711 00:48:27.204087  164676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0711 00:48:27.204148  164676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-576591
	I0711 00:48:27.227105  164676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32977 SSHKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/missing-upgrade-576591/id_rsa Username:docker}
	I0711 00:48:27.330643  164676 ssh_runner.go:195] Run: cat /etc/os-release
	I0711 00:48:27.336240  164676 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0711 00:48:27.336288  164676 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0711 00:48:27.336304  164676 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0711 00:48:27.336313  164676 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0711 00:48:27.336331  164676 filesync.go:126] Scanning /home/jenkins/minikube-integration/15452-3381/.minikube/addons for local assets ...
	I0711 00:48:27.336412  164676 filesync.go:126] Scanning /home/jenkins/minikube-integration/15452-3381/.minikube/files for local assets ...
	I0711 00:48:27.336527  164676 filesync.go:149] local asset: /home/jenkins/minikube-integration/15452-3381/.minikube/files/etc/ssl/certs/101682.pem -> 101682.pem in /etc/ssl/certs
	I0711 00:48:27.336659  164676 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0711 00:48:27.346870  164676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/files/etc/ssl/certs/101682.pem --> /etc/ssl/certs/101682.pem (1708 bytes)
	I0711 00:48:27.367246  164676 start.go:303] post-start completed in 163.21189ms
	I0711 00:48:27.367331  164676 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0711 00:48:27.367376  164676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-576591
	I0711 00:48:27.392309  164676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32977 SSHKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/missing-upgrade-576591/id_rsa Username:docker}
	I0711 00:48:27.479448  164676 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0711 00:48:27.484188  164676 fix.go:56] fixHost completed within 33.528783715s
	I0711 00:48:27.484212  164676 start.go:83] releasing machines lock for "missing-upgrade-576591", held for 33.52882411s
	I0711 00:48:27.484275  164676 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-576591
	I0711 00:48:27.504679  164676 ssh_runner.go:195] Run: cat /version.json
	I0711 00:48:27.504742  164676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-576591
	I0711 00:48:27.504741  164676 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0711 00:48:27.504800  164676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-576591
	I0711 00:48:27.525505  164676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32977 SSHKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/missing-upgrade-576591/id_rsa Username:docker}
	I0711 00:48:27.528793  164676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32977 SSHKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/missing-upgrade-576591/id_rsa Username:docker}
	W0711 00:48:27.615184  164676 start.go:483] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0711 00:48:27.615291  164676 ssh_runner.go:195] Run: systemctl --version
	I0711 00:48:27.664068  164676 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0711 00:48:27.669692  164676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0711 00:48:27.696798  164676 cni.go:236] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0711 00:48:27.696879  164676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0711 00:48:27.724920  164676 cni.go:268] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0711 00:48:27.724994  164676 start.go:466] detecting cgroup driver to use...
	I0711 00:48:27.725035  164676 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0711 00:48:27.725088  164676 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0711 00:48:27.735508  164676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0711 00:48:27.751711  164676 docker.go:196] disabling cri-docker service (if available) ...
	I0711 00:48:27.751793  164676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0711 00:48:27.769243  164676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0711 00:48:27.784129  164676 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0711 00:48:27.800481  164676 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0711 00:48:27.800587  164676 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0711 00:48:27.922968  164676 docker.go:212] disabling docker service ...
	I0711 00:48:27.923026  164676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0711 00:48:27.950986  164676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0711 00:48:27.964144  164676 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0711 00:48:28.079453  164676 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0711 00:48:28.220873  164676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0711 00:48:28.240943  164676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0711 00:48:28.259027  164676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.4.1"|' /etc/containerd/config.toml"
	I0711 00:48:28.272319  164676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0711 00:48:28.285097  164676 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0711 00:48:28.285173  164676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0711 00:48:28.297851  164676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0711 00:48:28.313025  164676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0711 00:48:28.328244  164676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0711 00:48:28.340495  164676 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0711 00:48:28.352776  164676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0711 00:48:28.363522  164676 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0711 00:48:28.370374  164676 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0711 00:48:28.376475  164676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0711 00:48:28.484456  164676 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0711 00:48:28.605524  164676 start.go:513] Will wait 60s for socket path /run/containerd/containerd.sock
	I0711 00:48:28.605629  164676 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0711 00:48:28.610699  164676 start.go:534] Will wait 60s for crictl version
	I0711 00:48:28.610754  164676 ssh_runner.go:195] Run: which crictl
	I0711 00:48:28.613810  164676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0711 00:48:28.645390  164676 retry.go:31] will retry after 13.908790766s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-11T00:48:28Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0711 00:48:42.555177  164676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0711 00:48:42.582320  164676 retry.go:31] will retry after 20.734644119s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-11T00:48:42Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0711 00:49:03.318072  164676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0711 00:49:03.359663  164676 retry.go:31] will retry after 11.843050992s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-11T00:49:03Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0711 00:49:15.203345  164676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0711 00:49:15.234101  164676 out.go:177] 
	W0711 00:49:15.235449  164676 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to start container runtime: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-11T00:49:15Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	
	W0711 00:49:15.235464  164676 out.go:239] * 
	W0711 00:49:15.236240  164676 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0711 00:49:15.238021  164676 out.go:177] 

                                                
                                                
** /stderr **
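The failure above is repeated `sudo /usr/bin/crictl version` errors of the form `unknown service runtime.v1alpha2.RuntimeService`, which usually indicates a CRI API mismatch: the crictl binary speaks `runtime.v1alpha2` while the containerd it reaches only serves a different CRI version. The crictl config minikube writes earlier in this log pins the endpoint but cannot bridge that API gap; reproduced here against a temp path instead of /etc:

```shell
# Contents minikube tees into /etc/crictl.yaml earlier in this log.
# Pinning the endpoint stops crictl from probing other runtime sockets,
# but it does not help when crictl and containerd disagree on the CRI
# API version (runtime.v1alpha2 vs runtime.v1) -- the likely root cause
# of the Unimplemented errors above.
printf '%s\n' 'runtime-endpoint: unix:///run/containerd/containerd.sock' \
  > /tmp/crictl.yaml
cat /tmp/crictl.yaml
```

On a live node the usual fix is matching crictl to the installed containerd's supported CRI version (or vice versa), not further endpoint tweaks.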
version_upgrade_test.go:343: failed missing container upgrade from v1.22.0. args: out/minikube-linux-amd64 start -p missing-upgrade-576591 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd : exit status 90
version_upgrade_test.go:345: *** TestMissingContainerUpgrade FAILED at 2023-07-11 00:49:15.264021093 +0000 UTC m=+1799.731627488
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-576591
helpers_test.go:235: (dbg) docker inspect missing-upgrade-576591:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "56f5860fcc8a1b009471dbe486e53ebca24bab9ad2b3c7f23d45d30cc3948777",
	        "Created": "2023-07-11T00:48:24.590686417Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 171559,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-11T00:48:24.967373278Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8768eddc4356afffe3e639d96dfedd92c4546269e9e4366ab52cf09f53c80b71",
	        "ResolvConfPath": "/var/lib/docker/containers/56f5860fcc8a1b009471dbe486e53ebca24bab9ad2b3c7f23d45d30cc3948777/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/56f5860fcc8a1b009471dbe486e53ebca24bab9ad2b3c7f23d45d30cc3948777/hostname",
	        "HostsPath": "/var/lib/docker/containers/56f5860fcc8a1b009471dbe486e53ebca24bab9ad2b3c7f23d45d30cc3948777/hosts",
	        "LogPath": "/var/lib/docker/containers/56f5860fcc8a1b009471dbe486e53ebca24bab9ad2b3c7f23d45d30cc3948777/56f5860fcc8a1b009471dbe486e53ebca24bab9ad2b3c7f23d45d30cc3948777-json.log",
	        "Name": "/missing-upgrade-576591",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-576591:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "missing-upgrade-576591",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/70468a417aa3931a0a01269b1038c1a23b96b793b5ebc77df0c216ba856e0274-init/diff:/var/lib/docker/overlay2/7bd3476dea08a100ad794c204dd68629d1e624a9c09c34a29c590c66bf4354bf/diff:/var/lib/docker/overlay2/b509f3f7d748ca56af266c151d4f7fd15581b3a72e1bffbfd03a95b6eb7caa9b/diff:/var/lib/docker/overlay2/c3c68c94258c7b89a9bdaa9b9ce580e9e12cabc5808d025c006219f4c506fe01/diff:/var/lib/docker/overlay2/22a90f3487945465be9ea5c5406e2d7267459e85813a9c4ceecc9d3191318532/diff:/var/lib/docker/overlay2/75c76a4473543ce355ac3b56363f86523ff825809d00f291914d0e2cf48328e0/diff:/var/lib/docker/overlay2/5b2041e465704c2b1f91cb20adf9ad6cfd0aa02cf8cd537b2199686b45868fbb/diff:/var/lib/docker/overlay2/5938cee9131ac9b6af4f609cce70a42c83bb605d8660c5f5d7576ee8bdd8754a/diff:/var/lib/docker/overlay2/55e1ce76a3214b93d8237589eb5efc6a58be1cddbe5e09ca7923e821e8c57c4b/diff:/var/lib/docker/overlay2/1a5c3a7a2bd638dd2661cd7190df4dfbdaef862450fd110b4f6a73e3e1f2f8c9/diff:/var/lib/docker/overlay2/b7b2e0
fb4504ed744e898aa350af62a7f085eaebfbc9f01e94126d41d8c0b346/diff:/var/lib/docker/overlay2/f5accdf6596302089975bc4a033ad6ff67ac9f75bd7c4d9e4fc673ef584f1742/diff:/var/lib/docker/overlay2/1d336560dfbbbb3cad9eb87efb4b6aeb26544589b9eae732d40f2d1824dcbed7/diff:/var/lib/docker/overlay2/2d079b5ff4e2dca590643e6b20e23591cd908f8a23440fdbb9eeefcfc460f26d/diff:/var/lib/docker/overlay2/16cb6b74d0909b88b4dc325724b850f864cb90ce420b889127a50d0be9b4cd43/diff:/var/lib/docker/overlay2/9ff612cf16bcd353538be7b3510f254408f2f5c9fd47792fc7e593b99de11112/diff:/var/lib/docker/overlay2/1b88b1529e3abf11f8e5f91c2a3e67339beb2c774440c9342575752ef82f4638/diff:/var/lib/docker/overlay2/7c5509bd24109a018df25743328b58831fbadc51621cd486bb21a02eaabba0e4/diff:/var/lib/docker/overlay2/2c9bf3ee38d08b0a4eb17bc94fface079e628f68103c0c16db3896d6ce217b0d/diff:/var/lib/docker/overlay2/84f48f02e417bf27f6eda2ae993f52fad927beb32c1e3368cbf30849ab066a1d/diff:/var/lib/docker/overlay2/48bef0d64ca081e39a618a0542fcfe58f213f739556f2253857b77dadef4319a/diff:/var/lib/d
ocker/overlay2/60878ecdbcbd5677af4d3393a5f12bfcdbce66fec73d4bb0162c5f1a2a4da02f/diff:/var/lib/docker/overlay2/9291606ea4893724303593a986480b74e5638c2df9da44312a10d5a58eea1fae/diff:/var/lib/docker/overlay2/6d2f2bd7702ab1d711a7d3c23e393feceec8c86dc56d6f142eb24977bdaedaa1/diff:/var/lib/docker/overlay2/234a3be9668a9ab3ae767a7fe0940da6488ebbe5ebfbd379b79b5adb0cd67792/diff:/var/lib/docker/overlay2/f56ae4b2d2be2293892a3b21cf7d38b14b100796db1f6334e7745693e5f3e023/diff:/var/lib/docker/overlay2/2d03f07874c02cd92555eb05437c9f8df9685de8ae03fff2bdfedab006297ceb/diff:/var/lib/docker/overlay2/f6a124c1a73361e9b62ead83ccb7f55b9e5dd420d935b8d93bce1c33333a9459/diff:/var/lib/docker/overlay2/db0ebe6fdb1d7ba19b8ab5d739cb191232e050aef1f7285b5bec1f311bc96387/diff:/var/lib/docker/overlay2/a0f30a310c00633226ad0d22febf3f25a14d8143cdead21bab663a9d7c1cf4af/diff:/var/lib/docker/overlay2/e205fd8155765d3fd90f62788efe8436921ba50287e623432d0aa1a33def8ba8/diff:/var/lib/docker/overlay2/7a67484f89aca823812e8d73a80cc9797c9b10f69c5eb9ac98fe71a8e19
ab506/diff:/var/lib/docker/overlay2/fd0b8a391baa5fb0caa21b5abe3ac0d59a45ec3488852627f0b57893c338feca/diff:/var/lib/docker/overlay2/1b77784d66b86afbe9d23d3be48e7d606f126762252330feafa09fb3c8268b7c/diff:/var/lib/docker/overlay2/10ca4f6e273035fa2b80258fcbf6ea18c6eb98a62c40883eb761761ad7a94ec6/diff:/var/lib/docker/overlay2/66f6341b3bd50cec15f0648fbed948a7b5a21a0a8940b597007f148d405a75b5/diff:/var/lib/docker/overlay2/471bbf00d20180bd21740ecfe062e1aaa1041d6de18a8b537f654df8353a6eb6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/70468a417aa3931a0a01269b1038c1a23b96b793b5ebc77df0c216ba856e0274/merged",
	                "UpperDir": "/var/lib/docker/overlay2/70468a417aa3931a0a01269b1038c1a23b96b793b5ebc77df0c216ba856e0274/diff",
	                "WorkDir": "/var/lib/docker/overlay2/70468a417aa3931a0a01269b1038c1a23b96b793b5ebc77df0c216ba856e0274/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-576591",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-576591/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-576591",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-576591",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-576591",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dae347d3c6b46c5423d641c1ed81e63d05d0da20f9e0aec8b6419e7882b62386",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32977"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32976"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32973"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32975"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32974"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/dae347d3c6b4",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "missing-upgrade-576591": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "56f5860fcc8a",
	                        "missing-upgrade-576591"
	                    ],
	                    "NetworkID": "ec9dcf8819865aeac3051ced0f6e35b1bd61b9b67026ed00fe95247aca04ebe2",
	                    "EndpointID": "f12fe9e7723121abaa170d6cd265d39c21f59bd3da929373fedabbabd2905f32",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
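The NetworkSettings.Ports block in the inspect output above maps each container port to a 127.0.0.1 host port. When triaging from a saved inspect dump, the host ports can be pulled out with a one-liner (the JSON below is copied from the log into a temp file so the sketch is self-contained):

```shell
# Two of the port bindings from the docker inspect output above
cat > /tmp/ports.json <<'EOF'
{
  "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "32977"}],
  "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "32974"}]
}
EOF
# Extract every bound host port from the saved JSON
grep -o '"HostPort": "[0-9]*"' /tmp/ports.json
```

Against a live daemon the same information comes from `docker port missing-upgrade-576591` or an inspect Go template such as `docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' missing-upgrade-576591`.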
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p missing-upgrade-576591 -n missing-upgrade-576591
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p missing-upgrade-576591 -n missing-upgrade-576591: exit status 2 (287.912849ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMissingContainerUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMissingContainerUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p missing-upgrade-576591 logs -n 25
helpers_test.go:252: TestMissingContainerUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-738578 sudo                                | cilium-738578             | jenkins | v1.30.1 | 11 Jul 23 00:48 UTC |                     |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-738578 sudo cat                            | cilium-738578             | jenkins | v1.30.1 | 11 Jul 23 00:48 UTC |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p cilium-738578 sudo cat                            | cilium-738578             | jenkins | v1.30.1 | 11 Jul 23 00:48 UTC |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p cilium-738578 sudo                                | cilium-738578             | jenkins | v1.30.1 | 11 Jul 23 00:48 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-738578 sudo                                | cilium-738578             | jenkins | v1.30.1 | 11 Jul 23 00:48 UTC |                     |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-738578 sudo cat                            | cilium-738578             | jenkins | v1.30.1 | 11 Jul 23 00:48 UTC |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-738578 sudo docker                         | cilium-738578             | jenkins | v1.30.1 | 11 Jul 23 00:48 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-738578 sudo                                | cilium-738578             | jenkins | v1.30.1 | 11 Jul 23 00:48 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-738578 sudo                                | cilium-738578             | jenkins | v1.30.1 | 11 Jul 23 00:48 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-738578 sudo cat                            | cilium-738578             | jenkins | v1.30.1 | 11 Jul 23 00:48 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-738578 sudo cat                            | cilium-738578             | jenkins | v1.30.1 | 11 Jul 23 00:48 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-738578 sudo                                | cilium-738578             | jenkins | v1.30.1 | 11 Jul 23 00:48 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-738578 sudo                                | cilium-738578             | jenkins | v1.30.1 | 11 Jul 23 00:48 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-738578 sudo                                | cilium-738578             | jenkins | v1.30.1 | 11 Jul 23 00:48 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-738578 sudo cat                            | cilium-738578             | jenkins | v1.30.1 | 11 Jul 23 00:48 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-738578 sudo cat                            | cilium-738578             | jenkins | v1.30.1 | 11 Jul 23 00:48 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-738578 sudo                                | cilium-738578             | jenkins | v1.30.1 | 11 Jul 23 00:48 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-738578 sudo                                | cilium-738578             | jenkins | v1.30.1 | 11 Jul 23 00:48 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-738578 sudo                                | cilium-738578             | jenkins | v1.30.1 | 11 Jul 23 00:48 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-738578 sudo find                           | cilium-738578             | jenkins | v1.30.1 | 11 Jul 23 00:48 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-738578 sudo crio                           | cilium-738578             | jenkins | v1.30.1 | 11 Jul 23 00:48 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-738578                                     | cilium-738578             | jenkins | v1.30.1 | 11 Jul 23 00:48 UTC | 11 Jul 23 00:48 UTC |
	| start   | -p cert-expiration-867467                            | cert-expiration-867467    | jenkins | v1.30.1 | 11 Jul 23 00:48 UTC | 11 Jul 23 00:49 UTC |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                       |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-938572                            | running-upgrade-938572    | jenkins | v1.30.1 | 11 Jul 23 00:48 UTC | 11 Jul 23 00:48 UTC |
	| start   | -p force-systemd-flag-255349                         | force-systemd-flag-255349 | jenkins | v1.30.1 | 11 Jul 23 00:48 UTC |                     |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                                 |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                       |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/11 00:48:56
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0711 00:48:56.822905  179066 out.go:296] Setting OutFile to fd 1 ...
	I0711 00:48:56.823222  179066 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0711 00:48:56.823242  179066 out.go:309] Setting ErrFile to fd 2...
	I0711 00:48:56.823250  179066 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0711 00:48:56.823407  179066 root.go:337] Updating PATH: /home/jenkins/minikube-integration/15452-3381/.minikube/bin
	I0711 00:48:56.824151  179066 out.go:303] Setting JSON to false
	I0711 00:48:56.826347  179066 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1889,"bootTime":1689034648,"procs":852,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0711 00:48:56.826446  179066 start.go:137] virtualization: kvm guest
	I0711 00:48:56.829675  179066 out.go:177] * [force-systemd-flag-255349] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0711 00:48:56.831633  179066 out.go:177]   - MINIKUBE_LOCATION=15452
	I0711 00:48:56.833367  179066 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0711 00:48:56.831686  179066 notify.go:220] Checking for updates...
	I0711 00:48:56.835155  179066 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15452-3381/kubeconfig
	I0711 00:48:56.837133  179066 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15452-3381/.minikube
	I0711 00:48:56.838891  179066 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0711 00:48:56.840303  179066 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0711 00:48:56.842043  179066 config.go:182] Loaded profile config "cert-expiration-867467": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0711 00:48:56.842159  179066 config.go:182] Loaded profile config "kubernetes-upgrade-840111": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0711 00:48:56.842264  179066 config.go:182] Loaded profile config "missing-upgrade-576591": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0711 00:48:56.842379  179066 driver.go:373] Setting default libvirt URI to qemu:///system
	I0711 00:48:56.870174  179066 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0711 00:48:56.870281  179066 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0711 00:48:56.940708  179066 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:74 SystemTime:2023-07-11 00:48:56.931829062 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0711 00:48:56.940841  179066 docker.go:294] overlay module found
	I0711 00:48:56.942919  179066 out.go:177] * Using the docker driver based on user configuration
	I0711 00:48:56.944312  179066 start.go:297] selected driver: docker
	I0711 00:48:56.944328  179066 start.go:944] validating driver "docker" against <nil>
	I0711 00:48:56.944343  179066 start.go:955] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0711 00:48:56.945219  179066 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0711 00:48:57.009877  179066 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:true NGoroutines:74 SystemTime:2023-07-11 00:48:56.999644886 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0711 00:48:57.010076  179066 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0711 00:48:57.010308  179066 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0711 00:48:57.012524  179066 out.go:177] * Using Docker driver with root privileges
	I0711 00:48:57.013942  179066 cni.go:84] Creating CNI manager for ""
	I0711 00:48:57.013981  179066 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0711 00:48:57.013995  179066 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0711 00:48:57.014004  179066 start_flags.go:319] config:
	{Name:force-systemd-flag-255349 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1689032083-15452@sha256:41e03f55414b4bc4a9169ee03de8460ddd2a95f539efd83fce689159a4e20667 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-flag-255349 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0711 00:48:57.015538  179066 out.go:177] * Starting control plane node force-systemd-flag-255349 in cluster force-systemd-flag-255349
	I0711 00:48:57.016850  179066 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0711 00:48:57.018225  179066 out.go:177] * Pulling base image ...
	I0711 00:48:57.019645  179066 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0711 00:48:57.019702  179066 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15452-3381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4
	I0711 00:48:57.019716  179066 cache.go:57] Caching tarball of preloaded images
	I0711 00:48:57.019745  179066 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1689032083-15452@sha256:41e03f55414b4bc4a9169ee03de8460ddd2a95f539efd83fce689159a4e20667 in local docker daemon
	I0711 00:48:57.019810  179066 preload.go:174] Found /home/jenkins/minikube-integration/15452-3381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0711 00:48:57.019824  179066 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on containerd
	I0711 00:48:57.019975  179066 profile.go:148] Saving config to /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/force-systemd-flag-255349/config.json ...
	I0711 00:48:57.020005  179066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/force-systemd-flag-255349/config.json: {Name:mkb63624422f6e92d081ff089ee05bdcfeaf483e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0711 00:48:57.043523  179066 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1689032083-15452@sha256:41e03f55414b4bc4a9169ee03de8460ddd2a95f539efd83fce689159a4e20667 in local docker daemon, skipping pull
	I0711 00:48:57.043564  179066 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1689032083-15452@sha256:41e03f55414b4bc4a9169ee03de8460ddd2a95f539efd83fce689159a4e20667 exists in daemon, skipping load
	I0711 00:48:57.043593  179066 cache.go:195] Successfully downloaded all kic artifacts
	I0711 00:48:57.043647  179066 start.go:365] acquiring machines lock for force-systemd-flag-255349: {Name:mk25967e5e3260f1903d90fcd2a6c7e95bf5ff05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0711 00:48:57.043766  179066 start.go:369] acquired machines lock for "force-systemd-flag-255349" in 92.272µs
	I0711 00:48:57.043795  179066 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-255349 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1689032083-15452@sha256:41e03f55414b4bc4a9169ee03de8460ddd2a95f539efd83fce689159a4e20667 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-flag-255349 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0711 00:48:57.043906  179066 start.go:125] createHost starting for "" (driver="docker")
	I0711 00:48:56.892163  177516 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-867467 --name cert-expiration-867467 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-867467 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-867467 --network cert-expiration-867467 --ip 192.168.103.2 --volume cert-expiration-867467:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1689032083-15452@sha256:41e03f55414b4bc4a9169ee03de8460ddd2a95f539efd83fce689159a4e20667: (1.033831714s)
	I0711 00:48:56.892235  177516 cli_runner.go:164] Run: docker container inspect cert-expiration-867467 --format={{.State.Running}}
	I0711 00:48:56.915589  177516 cli_runner.go:164] Run: docker container inspect cert-expiration-867467 --format={{.State.Status}}
	I0711 00:48:56.936156  177516 cli_runner.go:164] Run: docker exec cert-expiration-867467 stat /var/lib/dpkg/alternatives/iptables
	I0711 00:48:56.995925  177516 oci.go:144] the created container "cert-expiration-867467" has a running status.
	I0711 00:48:56.995942  177516 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/15452-3381/.minikube/machines/cert-expiration-867467/id_rsa...
	I0711 00:48:57.194092  177516 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15452-3381/.minikube/machines/cert-expiration-867467/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0711 00:48:57.224174  177516 cli_runner.go:164] Run: docker container inspect cert-expiration-867467 --format={{.State.Status}}
	I0711 00:48:57.244673  177516 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0711 00:48:57.244687  177516 kic_runner.go:114] Args: [docker exec --privileged cert-expiration-867467 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0711 00:48:57.337183  177516 cli_runner.go:164] Run: docker container inspect cert-expiration-867467 --format={{.State.Status}}
	I0711 00:48:57.380502  177516 machine.go:88] provisioning docker machine ...
	I0711 00:48:57.380526  177516 ubuntu.go:169] provisioning hostname "cert-expiration-867467"
	I0711 00:48:57.380590  177516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-867467
	I0711 00:48:57.404686  177516 main.go:141] libmachine: Using SSH client type: native
	I0711 00:48:57.405178  177516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32982 <nil> <nil>}
	I0711 00:48:57.405191  177516 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-867467 && echo "cert-expiration-867467" | sudo tee /etc/hostname
	I0711 00:48:57.623486  177516 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-867467
	
	I0711 00:48:57.623597  177516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-867467
	I0711 00:48:57.650600  177516 main.go:141] libmachine: Using SSH client type: native
	I0711 00:48:57.651057  177516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32982 <nil> <nil>}
	I0711 00:48:57.651071  177516 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-867467' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-867467/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-867467' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0711 00:48:57.823631  177516 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0711 00:48:57.823661  177516 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15452-3381/.minikube CaCertPath:/home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15452-3381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15452-3381/.minikube}
	I0711 00:48:57.823714  177516 ubuntu.go:177] setting up certificates
	I0711 00:48:57.823725  177516 provision.go:83] configureAuth start
	I0711 00:48:57.823811  177516 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-867467
	I0711 00:48:57.846243  177516 provision.go:138] copyHostCerts
	I0711 00:48:57.846314  177516 exec_runner.go:144] found /home/jenkins/minikube-integration/15452-3381/.minikube/ca.pem, removing ...
	I0711 00:48:57.846324  177516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/15452-3381/.minikube/ca.pem
	I0711 00:48:57.846407  177516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15452-3381/.minikube/ca.pem (1078 bytes)
	I0711 00:48:57.847230  177516 exec_runner.go:144] found /home/jenkins/minikube-integration/15452-3381/.minikube/cert.pem, removing ...
	I0711 00:48:57.847234  177516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/15452-3381/.minikube/cert.pem
	I0711 00:48:57.847265  177516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15452-3381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15452-3381/.minikube/cert.pem (1123 bytes)
	I0711 00:48:57.847320  177516 exec_runner.go:144] found /home/jenkins/minikube-integration/15452-3381/.minikube/key.pem, removing ...
	I0711 00:48:57.847323  177516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/15452-3381/.minikube/key.pem
	I0711 00:48:57.847351  177516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15452-3381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15452-3381/.minikube/key.pem (1679 bytes)
	I0711 00:48:57.847396  177516 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15452-3381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-867467 san=[192.168.103.2 127.0.0.1 localhost 127.0.0.1 minikube cert-expiration-867467]
	I0711 00:48:57.908814  177516 provision.go:172] copyRemoteCerts
	I0711 00:48:57.908878  177516 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0711 00:48:57.908913  177516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-867467
	I0711 00:48:57.935813  177516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32982 SSHKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/cert-expiration-867467/id_rsa Username:docker}
	I0711 00:48:58.030846  177516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0711 00:48:58.080783  177516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0711 00:48:58.117371  177516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0711 00:48:58.142388  177516 provision.go:86] duration metric: configureAuth took 318.642822ms
	I0711 00:48:58.142419  177516 ubuntu.go:193] setting minikube options for container-runtime
	I0711 00:48:58.142575  177516 config.go:182] Loaded profile config "cert-expiration-867467": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0711 00:48:58.142579  177516 machine.go:91] provisioned docker machine in 762.067326ms
	I0711 00:48:58.142583  177516 client.go:171] LocalClient.Create took 11.537632274s
	I0711 00:48:58.142600  177516 start.go:167] duration metric: libmachine.API.Create for "cert-expiration-867467" took 11.537685632s
	I0711 00:48:58.142606  177516 start.go:300] post-start starting for "cert-expiration-867467" (driver="docker")
	I0711 00:48:58.142612  177516 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0711 00:48:58.142660  177516 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0711 00:48:58.142688  177516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-867467
	I0711 00:48:58.160032  177516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32982 SSHKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/cert-expiration-867467/id_rsa Username:docker}
	I0711 00:48:58.254250  177516 ssh_runner.go:195] Run: cat /etc/os-release
	I0711 00:48:58.259139  177516 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0711 00:48:58.259162  177516 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0711 00:48:58.259174  177516 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0711 00:48:58.259180  177516 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0711 00:48:58.259199  177516 filesync.go:126] Scanning /home/jenkins/minikube-integration/15452-3381/.minikube/addons for local assets ...
	I0711 00:48:58.259312  177516 filesync.go:126] Scanning /home/jenkins/minikube-integration/15452-3381/.minikube/files for local assets ...
	I0711 00:48:58.259410  177516 filesync.go:149] local asset: /home/jenkins/minikube-integration/15452-3381/.minikube/files/etc/ssl/certs/101682.pem -> 101682.pem in /etc/ssl/certs
	I0711 00:48:58.259536  177516 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0711 00:48:58.270188  177516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/files/etc/ssl/certs/101682.pem --> /etc/ssl/certs/101682.pem (1708 bytes)
	I0711 00:48:58.292375  177516 start.go:303] post-start completed in 149.755964ms
	I0711 00:48:58.292754  177516 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-867467
	I0711 00:48:58.313425  177516 profile.go:148] Saving config to /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/cert-expiration-867467/config.json ...
	I0711 00:48:58.313915  177516 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0711 00:48:58.313978  177516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-867467
	I0711 00:48:58.334119  177516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32982 SSHKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/cert-expiration-867467/id_rsa Username:docker}
	I0711 00:48:58.428563  177516 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0711 00:48:58.433691  177516 start.go:128] duration metric: createHost completed in 11.831198704s
	I0711 00:48:58.433712  177516 start.go:83] releasing machines lock for "cert-expiration-867467", held for 11.831348661s
	I0711 00:48:58.433804  177516 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-867467
	I0711 00:48:58.452040  177516 ssh_runner.go:195] Run: cat /version.json
	I0711 00:48:58.452086  177516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-867467
	I0711 00:48:58.452118  177516 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0711 00:48:58.452162  177516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-867467
	I0711 00:48:58.472797  177516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32982 SSHKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/cert-expiration-867467/id_rsa Username:docker}
	I0711 00:48:58.473124  177516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32982 SSHKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/cert-expiration-867467/id_rsa Username:docker}
	I0711 00:48:58.557790  177516 ssh_runner.go:195] Run: systemctl --version
	I0711 00:48:58.653291  177516 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0711 00:48:58.658515  177516 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0711 00:48:58.681913  177516 cni.go:236] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0711 00:48:58.682000  177516 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0711 00:48:58.710911  177516 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0711 00:48:58.710926  177516 start.go:466] detecting cgroup driver to use...
	I0711 00:48:58.710959  177516 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0711 00:48:58.711001  177516 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0711 00:48:58.723005  177516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0711 00:48:58.733954  177516 docker.go:196] disabling cri-docker service (if available) ...
	I0711 00:48:58.734033  177516 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0711 00:48:58.746446  177516 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0711 00:48:58.759094  177516 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0711 00:48:58.832264  177516 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0711 00:48:58.919193  177516 docker.go:212] disabling docker service ...
	I0711 00:48:58.919242  177516 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0711 00:48:58.936966  177516 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0711 00:48:58.948070  177516 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0711 00:48:59.031612  177516 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0711 00:48:59.115184  177516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0711 00:48:59.129457  177516 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0711 00:48:59.147180  177516 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0711 00:48:59.155844  177516 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0711 00:48:59.164425  177516 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0711 00:48:59.164482  177516 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0711 00:48:59.173983  177516 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0711 00:48:59.184660  177516 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0711 00:48:59.195492  177516 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0711 00:48:59.204519  177516 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0711 00:48:59.212568  177516 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0711 00:48:59.221812  177516 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0711 00:48:59.230682  177516 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0711 00:48:59.239235  177516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0711 00:48:59.311803  177516 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0711 00:48:59.402511  177516 start.go:513] Will wait 60s for socket path /run/containerd/containerd.sock
	I0711 00:48:59.402578  177516 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0711 00:48:59.406705  177516 start.go:534] Will wait 60s for crictl version
	I0711 00:48:59.406751  177516 ssh_runner.go:195] Run: which crictl
	I0711 00:48:59.409941  177516 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0711 00:48:59.446750  177516 start.go:550] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.21
	RuntimeApiVersion:  v1
	I0711 00:48:59.446833  177516 ssh_runner.go:195] Run: containerd --version
	I0711 00:48:59.473112  177516 ssh_runner.go:195] Run: containerd --version
	I0711 00:48:59.504180  177516 out.go:177] * Preparing Kubernetes v1.27.3 on containerd 1.6.21 ...
	I0711 00:48:56.421588  170338 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0711 00:48:56.421629  170338 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0711 00:49:01.100617  170338 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:60162->192.168.76.2:8443: read: connection reset by peer
	I0711 00:49:01.100662  170338 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0711 00:49:01.101069  170338 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0711 00:48:59.506303  177516 cli_runner.go:164] Run: docker network inspect cert-expiration-867467 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0711 00:48:59.528714  177516 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0711 00:48:59.532315  177516 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0711 00:48:59.542158  177516 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0711 00:48:59.542215  177516 ssh_runner.go:195] Run: sudo crictl images --output json
	I0711 00:48:59.578775  177516 containerd.go:604] all images are preloaded for containerd runtime.
	I0711 00:48:59.578785  177516 containerd.go:518] Images already preloaded, skipping extraction
	I0711 00:48:59.578825  177516 ssh_runner.go:195] Run: sudo crictl images --output json
	I0711 00:48:59.615563  177516 containerd.go:604] all images are preloaded for containerd runtime.
	I0711 00:48:59.615585  177516 cache_images.go:84] Images are preloaded, skipping loading
	I0711 00:48:59.615689  177516 ssh_runner.go:195] Run: sudo crictl info
	I0711 00:48:59.650662  177516 cni.go:84] Creating CNI manager for ""
	I0711 00:48:59.650679  177516 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0711 00:48:59.650692  177516 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0711 00:48:59.650711  177516 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-867467 NodeName:cert-expiration-867467 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0711 00:48:59.650900  177516 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "cert-expiration-867467"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0711 00:48:59.650976  177516 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=cert-expiration-867467 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:cert-expiration-867467 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0711 00:48:59.651025  177516 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0711 00:48:59.660764  177516 binaries.go:44] Found k8s binaries, skipping transfer
	I0711 00:48:59.660812  177516 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0711 00:48:59.668879  177516 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (395 bytes)
	I0711 00:48:59.685191  177516 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0711 00:48:59.703743  177516 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2114 bytes)
	I0711 00:48:59.722529  177516 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0711 00:48:59.725628  177516 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0711 00:48:59.736640  177516 certs.go:56] Setting up /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/cert-expiration-867467 for IP: 192.168.103.2
	I0711 00:48:59.736664  177516 certs.go:190] acquiring lock for shared ca certs: {Name:mka06d51c60707055e156951f7d4275743d01d04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0711 00:48:59.736911  177516 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15452-3381/.minikube/ca.key
	I0711 00:48:59.736948  177516 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15452-3381/.minikube/proxy-client-ca.key
	I0711 00:48:59.737014  177516 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/cert-expiration-867467/client.key
	I0711 00:48:59.737029  177516 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/cert-expiration-867467/client.crt with IP's: []
	I0711 00:48:59.889616  177516 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/cert-expiration-867467/client.crt ...
	I0711 00:48:59.889632  177516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/cert-expiration-867467/client.crt: {Name:mk623e6375879a1032a4d021f49e6ecc287fa4c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0711 00:48:59.889876  177516 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/cert-expiration-867467/client.key ...
	I0711 00:48:59.889884  177516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/cert-expiration-867467/client.key: {Name:mk04a110eeb90826e8bc4bd4b3802378f02ede79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0711 00:48:59.889983  177516 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/cert-expiration-867467/apiserver.key.33fce0b9
	I0711 00:48:59.889994  177516 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/cert-expiration-867467/apiserver.crt.33fce0b9 with IP's: [192.168.103.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0711 00:49:00.108135  177516 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/cert-expiration-867467/apiserver.crt.33fce0b9 ...
	I0711 00:49:00.108158  177516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/cert-expiration-867467/apiserver.crt.33fce0b9: {Name:mk4ca43e16e6b75d9cbaa5e030717c368bf1406b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0711 00:49:00.108376  177516 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/cert-expiration-867467/apiserver.key.33fce0b9 ...
	I0711 00:49:00.108386  177516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/cert-expiration-867467/apiserver.key.33fce0b9: {Name:mkb6d2df9ad26217b5898026203331696221bcf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0711 00:49:00.108474  177516 certs.go:337] copying /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/cert-expiration-867467/apiserver.crt.33fce0b9 -> /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/cert-expiration-867467/apiserver.crt
	I0711 00:49:00.108536  177516 certs.go:341] copying /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/cert-expiration-867467/apiserver.key.33fce0b9 -> /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/cert-expiration-867467/apiserver.key
	I0711 00:49:00.108582  177516 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/cert-expiration-867467/proxy-client.key
	I0711 00:49:00.108593  177516 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/cert-expiration-867467/proxy-client.crt with IP's: []
	I0711 00:49:00.288576  177516 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/cert-expiration-867467/proxy-client.crt ...
	I0711 00:49:00.288601  177516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/cert-expiration-867467/proxy-client.crt: {Name:mk9c5daafa3e0ee0b8322f924222995b6c1dcb88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0711 00:49:00.288813  177516 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/cert-expiration-867467/proxy-client.key ...
	I0711 00:49:00.288821  177516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/cert-expiration-867467/proxy-client.key: {Name:mkbd9079b51d62d7e1b3e3ca93075226dfb8ed86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0711 00:49:00.289057  177516 certs.go:437] found cert: /home/jenkins/minikube-integration/15452-3381/.minikube/certs/home/jenkins/minikube-integration/15452-3381/.minikube/certs/10168.pem (1338 bytes)
	W0711 00:49:00.289113  177516 certs.go:433] ignoring /home/jenkins/minikube-integration/15452-3381/.minikube/certs/home/jenkins/minikube-integration/15452-3381/.minikube/certs/10168_empty.pem, impossibly tiny 0 bytes
	I0711 00:49:00.289124  177516 certs.go:437] found cert: /home/jenkins/minikube-integration/15452-3381/.minikube/certs/home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca-key.pem (1679 bytes)
	I0711 00:49:00.289150  177516 certs.go:437] found cert: /home/jenkins/minikube-integration/15452-3381/.minikube/certs/home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca.pem (1078 bytes)
	I0711 00:49:00.289173  177516 certs.go:437] found cert: /home/jenkins/minikube-integration/15452-3381/.minikube/certs/home/jenkins/minikube-integration/15452-3381/.minikube/certs/cert.pem (1123 bytes)
	I0711 00:49:00.289193  177516 certs.go:437] found cert: /home/jenkins/minikube-integration/15452-3381/.minikube/certs/home/jenkins/minikube-integration/15452-3381/.minikube/certs/key.pem (1679 bytes)
	I0711 00:49:00.289247  177516 certs.go:437] found cert: /home/jenkins/minikube-integration/15452-3381/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15452-3381/.minikube/files/etc/ssl/certs/101682.pem (1708 bytes)
	I0711 00:49:00.290035  177516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/cert-expiration-867467/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0711 00:49:00.317811  177516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/cert-expiration-867467/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0711 00:49:00.340260  177516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/cert-expiration-867467/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0711 00:49:00.362288  177516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/cert-expiration-867467/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0711 00:49:00.384434  177516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0711 00:49:00.406429  177516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0711 00:49:00.488506  177516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0711 00:49:00.534280  177516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0711 00:49:00.559514  177516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0711 00:49:00.584835  177516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/certs/10168.pem --> /usr/share/ca-certificates/10168.pem (1338 bytes)
	I0711 00:49:00.611534  177516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/files/etc/ssl/certs/101682.pem --> /usr/share/ca-certificates/101682.pem (1708 bytes)
	I0711 00:49:00.636273  177516 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0711 00:49:00.654225  177516 ssh_runner.go:195] Run: openssl version
	I0711 00:49:00.660750  177516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0711 00:49:00.669113  177516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0711 00:49:00.672122  177516 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 11 00:19 /usr/share/ca-certificates/minikubeCA.pem
	I0711 00:49:00.672167  177516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0711 00:49:00.678614  177516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0711 00:49:00.687445  177516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10168.pem && ln -fs /usr/share/ca-certificates/10168.pem /etc/ssl/certs/10168.pem"
	I0711 00:49:00.696222  177516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10168.pem
	I0711 00:49:00.699419  177516 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 11 00:24 /usr/share/ca-certificates/10168.pem
	I0711 00:49:00.699478  177516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10168.pem
	I0711 00:49:00.705713  177516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10168.pem /etc/ssl/certs/51391683.0"
	I0711 00:49:00.714502  177516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101682.pem && ln -fs /usr/share/ca-certificates/101682.pem /etc/ssl/certs/101682.pem"
	I0711 00:49:00.724665  177516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101682.pem
	I0711 00:49:00.728597  177516 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 11 00:24 /usr/share/ca-certificates/101682.pem
	I0711 00:49:00.728640  177516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101682.pem
	I0711 00:49:00.736122  177516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101682.pem /etc/ssl/certs/3ec20f2e.0"
	I0711 00:49:00.744724  177516 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0711 00:49:00.747626  177516 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0711 00:49:00.747675  177516 kubeadm.go:404] StartCluster: {Name:cert-expiration-867467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1689032083-15452@sha256:41e03f55414b4bc4a9169ee03de8460ddd2a95f539efd83fce689159a4e20667 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:cert-expiration-867467 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0711 00:49:00.747756  177516 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0711 00:49:00.747795  177516 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0711 00:49:00.784557  177516 cri.go:89] found id: ""
	I0711 00:49:00.784602  177516 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0711 00:49:00.793314  177516 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0711 00:49:00.800937  177516 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0711 00:49:00.800977  177516 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0711 00:49:00.809549  177516 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0711 00:49:00.809577  177516 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0711 00:49:00.859047  177516 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0711 00:49:00.859146  177516 kubeadm.go:322] [preflight] Running pre-flight checks
	I0711 00:49:00.898349  177516 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0711 00:49:00.898400  177516 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1037-gcp
	I0711 00:49:00.898428  177516 kubeadm.go:322] OS: Linux
	I0711 00:49:00.898465  177516 kubeadm.go:322] CGROUPS_CPU: enabled
	I0711 00:49:00.898502  177516 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0711 00:49:00.898546  177516 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0711 00:49:00.898584  177516 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0711 00:49:00.898650  177516 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0711 00:49:00.898714  177516 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0711 00:49:00.898750  177516 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0711 00:49:00.898787  177516 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0711 00:49:00.898824  177516 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0711 00:49:00.964182  177516 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0711 00:49:00.964370  177516 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0711 00:49:00.964502  177516 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0711 00:49:01.178921  177516 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0711 00:49:01.299555  177516 out.go:204]   - Generating certificates and keys ...
	I0711 00:49:01.299766  177516 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0711 00:49:01.299978  177516 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0711 00:48:57.046011  179066 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0711 00:48:57.046254  179066 start.go:159] libmachine.API.Create for "force-systemd-flag-255349" (driver="docker")
	I0711 00:48:57.046281  179066 client.go:168] LocalClient.Create starting
	I0711 00:48:57.046358  179066 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca.pem
	I0711 00:48:57.046394  179066 main.go:141] libmachine: Decoding PEM data...
	I0711 00:48:57.046411  179066 main.go:141] libmachine: Parsing certificate...
	I0711 00:48:57.046468  179066 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15452-3381/.minikube/certs/cert.pem
	I0711 00:48:57.046486  179066 main.go:141] libmachine: Decoding PEM data...
	I0711 00:48:57.046497  179066 main.go:141] libmachine: Parsing certificate...
	I0711 00:48:57.046786  179066 cli_runner.go:164] Run: docker network inspect force-systemd-flag-255349 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0711 00:48:57.063976  179066 cli_runner.go:211] docker network inspect force-systemd-flag-255349 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0711 00:48:57.064082  179066 network_create.go:281] running [docker network inspect force-systemd-flag-255349] to gather additional debugging logs...
	I0711 00:48:57.064111  179066 cli_runner.go:164] Run: docker network inspect force-systemd-flag-255349
	W0711 00:48:57.083182  179066 cli_runner.go:211] docker network inspect force-systemd-flag-255349 returned with exit code 1
	I0711 00:48:57.083216  179066 network_create.go:284] error running [docker network inspect force-systemd-flag-255349]: docker network inspect force-systemd-flag-255349: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-255349 not found
	I0711 00:48:57.083234  179066 network_create.go:286] output of [docker network inspect force-systemd-flag-255349]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-255349 not found
	
	** /stderr **
	I0711 00:48:57.083311  179066 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0711 00:48:57.102399  179066 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6f3ac6422f7d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:4d:e8:03:fa} reservation:<nil>}
	I0711 00:48:57.103233  179066 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-622643b4f5b5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:01:af:cb:50} reservation:<nil>}
	I0711 00:48:57.104793  179066 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ec9dcf881986 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:8f:dc:2f:b9} reservation:<nil>}
	I0711 00:48:57.106395  179066 network.go:214] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-d3b3149cf4d7 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:29:3c:41:9a} reservation:<nil>}
	I0711 00:48:57.107204  179066 network.go:214] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-78aec85996f5 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:23:bf:43:b2} reservation:<nil>}
	I0711 00:48:57.107808  179066 network.go:214] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-9b92dc2d9678 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:02:42:e1:ec:76:6b} reservation:<nil>}
	I0711 00:48:57.108702  179066 network.go:214] skipping subnet 192.168.103.0/24 that is taken: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName:br-aa39a3e33e95 IfaceIPv4:192.168.103.1 IfaceMTU:1500 IfaceMAC:02:42:fd:d5:88:f6} reservation:<nil>}
	I0711 00:48:57.109971  179066 network.go:209] using free private subnet 192.168.112.0/24: &{IP:192.168.112.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.112.0/24 Gateway:192.168.112.1 ClientMin:192.168.112.2 ClientMax:192.168.112.254 Broadcast:192.168.112.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00137acd0}
	I0711 00:48:57.110000  179066 network_create.go:123] attempt to create docker network force-systemd-flag-255349 192.168.112.0/24 with gateway 192.168.112.1 and MTU of 1500 ...
	I0711 00:48:57.110057  179066 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.112.0/24 --gateway=192.168.112.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-255349 force-systemd-flag-255349
	I0711 00:48:57.169155  179066 network_create.go:107] docker network force-systemd-flag-255349 192.168.112.0/24 created
	I0711 00:48:57.169196  179066 kic.go:117] calculated static IP "192.168.112.2" for the "force-systemd-flag-255349" container
	I0711 00:48:57.169278  179066 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0711 00:48:57.189118  179066 cli_runner.go:164] Run: docker volume create force-systemd-flag-255349 --label name.minikube.sigs.k8s.io=force-systemd-flag-255349 --label created_by.minikube.sigs.k8s.io=true
	I0711 00:48:57.216191  179066 oci.go:103] Successfully created a docker volume force-systemd-flag-255349
	I0711 00:48:57.216274  179066 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-255349-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-255349 --entrypoint /usr/bin/test -v force-systemd-flag-255349:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1689032083-15452@sha256:41e03f55414b4bc4a9169ee03de8460ddd2a95f539efd83fce689159a4e20667 -d /var/lib
	I0711 00:48:57.853684  179066 oci.go:107] Successfully prepared a docker volume force-systemd-flag-255349
	I0711 00:48:57.853746  179066 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0711 00:48:57.853774  179066 kic.go:190] Starting extracting preloaded images to volume ...
	I0711 00:48:57.853884  179066 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15452-3381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-255349:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1689032083-15452@sha256:41e03f55414b4bc4a9169ee03de8460ddd2a95f539efd83fce689159a4e20667 -I lz4 -xf /preloaded.tar -C /extractDir
	I0711 00:49:01.458288  177516 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0711 00:49:01.591417  177516 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0711 00:49:01.865891  177516 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0711 00:49:02.073631  177516 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0711 00:49:02.226210  177516 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0711 00:49:02.226441  177516 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-867467 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0711 00:49:02.508452  177516 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0711 00:49:02.508633  177516 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-867467 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0711 00:49:02.734966  177516 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0711 00:49:02.783205  177516 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0711 00:49:02.858233  177516 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0711 00:49:02.858342  177516 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0711 00:49:03.046153  177516 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0711 00:49:03.235566  177516 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0711 00:49:03.325603  177516 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0711 00:49:03.475628  177516 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0711 00:49:03.492401  177516 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0711 00:49:03.494419  177516 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0711 00:49:03.494480  177516 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0711 00:49:03.630461  177516 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0711 00:49:03.318072  164676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0711 00:49:03.359663  164676 retry.go:31] will retry after 11.843050992s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-11T00:49:03Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0711 00:49:01.419772  170338 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0711 00:49:01.420249  170338 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0711 00:49:01.919893  170338 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0711 00:49:01.920508  170338 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0711 00:49:02.420138  170338 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0711 00:49:02.420958  170338 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0711 00:49:02.919582  170338 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0711 00:49:02.920470  170338 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0711 00:49:03.419988  170338 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0711 00:49:03.420524  170338 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0711 00:49:03.919679  170338 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0711 00:49:03.632645  177516 out.go:204]   - Booting up control plane ...
	I0711 00:49:03.632755  177516 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0711 00:49:03.634566  177516 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0711 00:49:03.636217  177516 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0711 00:49:03.637614  177516 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0711 00:49:03.640932  177516 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0711 00:49:03.020311  179066 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15452-3381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-255349:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1689032083-15452@sha256:41e03f55414b4bc4a9169ee03de8460ddd2a95f539efd83fce689159a4e20667 -I lz4 -xf /preloaded.tar -C /extractDir: (5.166321121s)
	I0711 00:49:03.020368  179066 kic.go:199] duration metric: took 5.166585 seconds to extract preloaded images to volume
	W0711 00:49:03.020570  179066 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0711 00:49:03.020726  179066 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0711 00:49:03.083495  179066 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-255349 --name force-systemd-flag-255349 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-255349 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-255349 --network force-systemd-flag-255349 --ip 192.168.112.2 --volume force-systemd-flag-255349:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1689032083-15452@sha256:41e03f55414b4bc4a9169ee03de8460ddd2a95f539efd83fce689159a4e20667
	I0711 00:49:03.403002  179066 cli_runner.go:164] Run: docker container inspect force-systemd-flag-255349 --format={{.State.Running}}
	I0711 00:49:03.425061  179066 cli_runner.go:164] Run: docker container inspect force-systemd-flag-255349 --format={{.State.Status}}
	I0711 00:49:03.444922  179066 cli_runner.go:164] Run: docker exec force-systemd-flag-255349 stat /var/lib/dpkg/alternatives/iptables
	I0711 00:49:03.504900  179066 oci.go:144] the created container "force-systemd-flag-255349" has a running status.
	I0711 00:49:03.504946  179066 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/15452-3381/.minikube/machines/force-systemd-flag-255349/id_rsa...
	I0711 00:49:03.649345  179066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15452-3381/.minikube/machines/force-systemd-flag-255349/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0711 00:49:03.649401  179066 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15452-3381/.minikube/machines/force-systemd-flag-255349/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0711 00:49:03.671248  179066 cli_runner.go:164] Run: docker container inspect force-systemd-flag-255349 --format={{.State.Status}}
	I0711 00:49:03.694327  179066 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0711 00:49:03.694352  179066 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-255349 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0711 00:49:03.766883  179066 cli_runner.go:164] Run: docker container inspect force-systemd-flag-255349 --format={{.State.Status}}
	I0711 00:49:03.787186  179066 machine.go:88] provisioning docker machine ...
	I0711 00:49:03.787224  179066 ubuntu.go:169] provisioning hostname "force-systemd-flag-255349"
	I0711 00:49:03.787286  179066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-255349
	I0711 00:49:03.812850  179066 main.go:141] libmachine: Using SSH client type: native
	I0711 00:49:03.813612  179066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32987 <nil> <nil>}
	I0711 00:49:03.813630  179066 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-255349 && echo "force-systemd-flag-255349" | sudo tee /etc/hostname
	I0711 00:49:03.814432  179066 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42784->127.0.0.1:32987: read: connection reset by peer
	I0711 00:49:06.953475  179066 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-255349
	
	I0711 00:49:06.953582  179066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-255349
	I0711 00:49:06.972935  179066 main.go:141] libmachine: Using SSH client type: native
	I0711 00:49:06.973621  179066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32987 <nil> <nil>}
	I0711 00:49:06.973657  179066 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-255349' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-255349/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-255349' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0711 00:49:07.097714  179066 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0711 00:49:07.097743  179066 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15452-3381/.minikube CaCertPath:/home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15452-3381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15452-3381/.minikube}
	I0711 00:49:07.097764  179066 ubuntu.go:177] setting up certificates
	I0711 00:49:07.097773  179066 provision.go:83] configureAuth start
	I0711 00:49:07.097829  179066 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-255349
	I0711 00:49:07.114753  179066 provision.go:138] copyHostCerts
	I0711 00:49:07.114808  179066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15452-3381/.minikube/ca.pem
	I0711 00:49:07.114844  179066 exec_runner.go:144] found /home/jenkins/minikube-integration/15452-3381/.minikube/ca.pem, removing ...
	I0711 00:49:07.114855  179066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/15452-3381/.minikube/ca.pem
	I0711 00:49:07.114927  179066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15452-3381/.minikube/ca.pem (1078 bytes)
	I0711 00:49:07.115016  179066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15452-3381/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15452-3381/.minikube/cert.pem
	I0711 00:49:07.115041  179066 exec_runner.go:144] found /home/jenkins/minikube-integration/15452-3381/.minikube/cert.pem, removing ...
	I0711 00:49:07.115049  179066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/15452-3381/.minikube/cert.pem
	I0711 00:49:07.115089  179066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15452-3381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15452-3381/.minikube/cert.pem (1123 bytes)
	I0711 00:49:07.115149  179066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15452-3381/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15452-3381/.minikube/key.pem
	I0711 00:49:07.115169  179066 exec_runner.go:144] found /home/jenkins/minikube-integration/15452-3381/.minikube/key.pem, removing ...
	I0711 00:49:07.115174  179066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/15452-3381/.minikube/key.pem
	I0711 00:49:07.115212  179066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15452-3381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15452-3381/.minikube/key.pem (1679 bytes)
	I0711 00:49:07.115295  179066 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15452-3381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-255349 san=[192.168.112.2 127.0.0.1 localhost 127.0.0.1 minikube force-systemd-flag-255349]
	I0711 00:49:07.382158  179066 provision.go:172] copyRemoteCerts
	I0711 00:49:07.382212  179066 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0711 00:49:07.382253  179066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-255349
	I0711 00:49:07.401792  179066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32987 SSHKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/force-systemd-flag-255349/id_rsa Username:docker}
	I0711 00:49:07.494767  179066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0711 00:49:07.494829  179066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0711 00:49:07.519890  179066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15452-3381/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0711 00:49:07.519968  179066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0711 00:49:07.546145  179066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15452-3381/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0711 00:49:07.546236  179066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0711 00:49:07.569391  179066 provision.go:86] duration metric: configureAuth took 471.607168ms
	I0711 00:49:07.569415  179066 ubuntu.go:193] setting minikube options for container-runtime
	I0711 00:49:07.569567  179066 config.go:182] Loaded profile config "force-systemd-flag-255349": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0711 00:49:07.569577  179066 machine.go:91] provisioned docker machine in 3.782368104s
	I0711 00:49:07.569583  179066 client.go:171] LocalClient.Create took 10.523296545s
	I0711 00:49:07.569601  179066 start.go:167] duration metric: libmachine.API.Create for "force-systemd-flag-255349" took 10.523346614s
	I0711 00:49:07.569607  179066 start.go:300] post-start starting for "force-systemd-flag-255349" (driver="docker")
	I0711 00:49:07.569619  179066 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0711 00:49:07.569668  179066 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0711 00:49:07.569703  179066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-255349
	I0711 00:49:07.588863  179066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32987 SSHKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/force-systemd-flag-255349/id_rsa Username:docker}
	I0711 00:49:07.685944  179066 ssh_runner.go:195] Run: cat /etc/os-release
	I0711 00:49:07.689577  179066 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0711 00:49:07.689631  179066 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0711 00:49:07.689645  179066 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0711 00:49:07.689655  179066 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0711 00:49:07.689673  179066 filesync.go:126] Scanning /home/jenkins/minikube-integration/15452-3381/.minikube/addons for local assets ...
	I0711 00:49:07.689766  179066 filesync.go:126] Scanning /home/jenkins/minikube-integration/15452-3381/.minikube/files for local assets ...
	I0711 00:49:07.689870  179066 filesync.go:149] local asset: /home/jenkins/minikube-integration/15452-3381/.minikube/files/etc/ssl/certs/101682.pem -> 101682.pem in /etc/ssl/certs
	I0711 00:49:07.689900  179066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15452-3381/.minikube/files/etc/ssl/certs/101682.pem -> /etc/ssl/certs/101682.pem
	I0711 00:49:07.690060  179066 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0711 00:49:07.701353  179066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/files/etc/ssl/certs/101682.pem --> /etc/ssl/certs/101682.pem (1708 bytes)
	I0711 00:49:07.723276  179066 start.go:303] post-start completed in 153.653682ms
	I0711 00:49:07.723608  179066 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-255349
	I0711 00:49:07.738841  179066 profile.go:148] Saving config to /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/force-systemd-flag-255349/config.json ...
	I0711 00:49:07.739094  179066 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0711 00:49:07.739153  179066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-255349
	I0711 00:49:07.757662  179066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32987 SSHKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/force-systemd-flag-255349/id_rsa Username:docker}
	I0711 00:49:07.843426  179066 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0711 00:49:07.847308  179066 start.go:128] duration metric: createHost completed in 10.803389244s
	I0711 00:49:07.847331  179066 start.go:83] releasing machines lock for "force-systemd-flag-255349", held for 10.803551321s
	I0711 00:49:07.847388  179066 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-255349
	I0711 00:49:07.865566  179066 ssh_runner.go:195] Run: cat /version.json
	I0711 00:49:07.865598  179066 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0711 00:49:07.865651  179066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-255349
	I0711 00:49:07.865686  179066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-255349
	I0711 00:49:07.884863  179066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32987 SSHKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/force-systemd-flag-255349/id_rsa Username:docker}
	I0711 00:49:07.885196  179066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32987 SSHKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/force-systemd-flag-255349/id_rsa Username:docker}
	I0711 00:49:08.060105  179066 ssh_runner.go:195] Run: systemctl --version
	I0711 00:49:08.064111  179066 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0711 00:49:08.067809  179066 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0711 00:49:08.090344  179066 cni.go:236] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0711 00:49:08.090433  179066 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0711 00:49:08.117271  179066 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0711 00:49:08.117296  179066 start.go:466] detecting cgroup driver to use...
	I0711 00:49:08.117310  179066 start.go:470] using "systemd" cgroup driver as enforced via flags
	I0711 00:49:08.117368  179066 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0711 00:49:08.129788  179066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0711 00:49:08.139424  179066 docker.go:196] disabling cri-docker service (if available) ...
	I0711 00:49:08.139483  179066 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0711 00:49:08.151619  179066 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0711 00:49:08.171345  179066 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0711 00:49:08.253145  179066 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0711 00:49:08.345350  179066 docker.go:212] disabling docker service ...
	I0711 00:49:08.345416  179066 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0711 00:49:08.368458  179066 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0711 00:49:08.382480  179066 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0711 00:49:08.479580  179066 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0711 00:49:08.572121  179066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0711 00:49:08.585315  179066 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0711 00:49:08.604587  179066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0711 00:49:08.615365  179066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0711 00:49:08.625719  179066 containerd.go:145] configuring containerd to use "systemd" as cgroup driver...
	I0711 00:49:08.625807  179066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0711 00:49:08.639262  179066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0711 00:49:08.651166  179066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0711 00:49:08.661346  179066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0711 00:49:08.672400  179066 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0711 00:49:08.681497  179066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0711 00:49:08.693928  179066 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0711 00:49:08.705105  179066 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0711 00:49:08.713003  179066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0711 00:49:08.787762  179066 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0711 00:49:08.874183  179066 start.go:513] Will wait 60s for socket path /run/containerd/containerd.sock
	I0711 00:49:08.874258  179066 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0711 00:49:08.878259  179066 start.go:534] Will wait 60s for crictl version
	I0711 00:49:08.878325  179066 ssh_runner.go:195] Run: which crictl
	I0711 00:49:08.881776  179066 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0711 00:49:08.916494  179066 start.go:550] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.21
	RuntimeApiVersion:  v1
	I0711 00:49:08.916570  179066 ssh_runner.go:195] Run: containerd --version
	I0711 00:49:08.938440  179066 ssh_runner.go:195] Run: containerd --version
	I0711 00:49:08.963873  179066 out.go:177] * Preparing Kubernetes v1.27.3 on containerd 1.6.21 ...
	I0711 00:49:09.144367  177516 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.503029 seconds
	I0711 00:49:09.144534  177516 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0711 00:49:09.162081  177516 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0711 00:49:09.685623  177516 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0711 00:49:09.685861  177516 kubeadm.go:322] [mark-control-plane] Marking the node cert-expiration-867467 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0711 00:49:10.196913  177516 kubeadm.go:322] [bootstrap-token] Using token: q7zqhd.k7w9wu3vxpc8ohq9
	I0711 00:49:08.965084  179066 cli_runner.go:164] Run: docker network inspect force-systemd-flag-255349 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0711 00:49:08.980622  179066 ssh_runner.go:195] Run: grep 192.168.112.1	host.minikube.internal$ /etc/hosts
	I0711 00:49:08.983922  179066 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.112.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0711 00:49:08.994008  179066 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0711 00:49:08.994059  179066 ssh_runner.go:195] Run: sudo crictl images --output json
	I0711 00:49:09.023688  179066 containerd.go:604] all images are preloaded for containerd runtime.
	I0711 00:49:09.023704  179066 containerd.go:518] Images already preloaded, skipping extraction
	I0711 00:49:09.023755  179066 ssh_runner.go:195] Run: sudo crictl images --output json
	I0711 00:49:09.055688  179066 containerd.go:604] all images are preloaded for containerd runtime.
	I0711 00:49:09.055711  179066 cache_images.go:84] Images are preloaded, skipping loading
	I0711 00:49:09.055773  179066 ssh_runner.go:195] Run: sudo crictl info
	I0711 00:49:09.091768  179066 cni.go:84] Creating CNI manager for ""
	I0711 00:49:09.091801  179066 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0711 00:49:09.091824  179066 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0711 00:49:09.091849  179066 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.112.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-255349 NodeName:force-systemd-flag-255349 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.112.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.112.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0711 00:49:09.091986  179066 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.112.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "force-systemd-flag-255349"
	  kubeletExtraArgs:
	    node-ip: 192.168.112.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.112.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0711 00:49:09.092056  179066 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=force-systemd-flag-255349 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.112.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:force-systemd-flag-255349 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0711 00:49:09.092120  179066 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0711 00:49:09.102548  179066 binaries.go:44] Found k8s binaries, skipping transfer
	I0711 00:49:09.102696  179066 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0711 00:49:09.112340  179066 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (398 bytes)
	I0711 00:49:09.127892  179066 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0711 00:49:09.146853  179066 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2116 bytes)
	I0711 00:49:09.165611  179066 ssh_runner.go:195] Run: grep 192.168.112.2	control-plane.minikube.internal$ /etc/hosts
	I0711 00:49:09.169116  179066 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.112.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0711 00:49:09.180341  179066 certs.go:56] Setting up /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/force-systemd-flag-255349 for IP: 192.168.112.2
	I0711 00:49:09.180388  179066 certs.go:190] acquiring lock for shared ca certs: {Name:mka06d51c60707055e156951f7d4275743d01d04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0711 00:49:09.180611  179066 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15452-3381/.minikube/ca.key
	I0711 00:49:09.180657  179066 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15452-3381/.minikube/proxy-client-ca.key
	I0711 00:49:09.180705  179066 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/force-systemd-flag-255349/client.key
	I0711 00:49:09.180754  179066 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/force-systemd-flag-255349/client.crt with IP's: []
	I0711 00:49:09.346501  179066 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/force-systemd-flag-255349/client.crt ...
	I0711 00:49:09.346531  179066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/force-systemd-flag-255349/client.crt: {Name:mke2bf60a86637eb4cf69db226c530920c7eb802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0711 00:49:09.346695  179066 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/force-systemd-flag-255349/client.key ...
	I0711 00:49:09.346706  179066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/force-systemd-flag-255349/client.key: {Name:mk5780bde0075716d9f4961680440e8155ba52a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0711 00:49:09.346795  179066 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/force-systemd-flag-255349/apiserver.key.9c554139
	I0711 00:49:09.346808  179066 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/force-systemd-flag-255349/apiserver.crt.9c554139 with IP's: [192.168.112.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0711 00:49:09.458019  179066 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/force-systemd-flag-255349/apiserver.crt.9c554139 ...
	I0711 00:49:09.458047  179066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/force-systemd-flag-255349/apiserver.crt.9c554139: {Name:mk9220d1f5275d6a5aeaeacbb3a1a93feab15678 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0711 00:49:09.458204  179066 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/force-systemd-flag-255349/apiserver.key.9c554139 ...
	I0711 00:49:09.458219  179066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/force-systemd-flag-255349/apiserver.key.9c554139: {Name:mk7b03ae84c14dbcb0abd5f3f0c5927485cd4577 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0711 00:49:09.458282  179066 certs.go:337] copying /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/force-systemd-flag-255349/apiserver.crt.9c554139 -> /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/force-systemd-flag-255349/apiserver.crt
	I0711 00:49:09.458341  179066 certs.go:341] copying /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/force-systemd-flag-255349/apiserver.key.9c554139 -> /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/force-systemd-flag-255349/apiserver.key
	I0711 00:49:09.458387  179066 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/force-systemd-flag-255349/proxy-client.key
	I0711 00:49:09.458404  179066 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/force-systemd-flag-255349/proxy-client.crt with IP's: []
	I0711 00:49:09.528546  179066 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/force-systemd-flag-255349/proxy-client.crt ...
	I0711 00:49:09.528584  179066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/force-systemd-flag-255349/proxy-client.crt: {Name:mkbfa2b22b89c07258f181ef8001259f623fa81c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0711 00:49:09.528801  179066 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/force-systemd-flag-255349/proxy-client.key ...
	I0711 00:49:09.528814  179066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/force-systemd-flag-255349/proxy-client.key: {Name:mke5ab4eb7f8b0cb369a5a80664061557d0bc209 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0711 00:49:09.528890  179066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/force-systemd-flag-255349/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0711 00:49:09.528911  179066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/force-systemd-flag-255349/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0711 00:49:09.528925  179066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/force-systemd-flag-255349/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0711 00:49:09.528939  179066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/force-systemd-flag-255349/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0711 00:49:09.528952  179066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15452-3381/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0711 00:49:09.528965  179066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15452-3381/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0711 00:49:09.528979  179066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15452-3381/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0711 00:49:09.528994  179066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15452-3381/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0711 00:49:09.529055  179066 certs.go:437] found cert: /home/jenkins/minikube-integration/15452-3381/.minikube/certs/home/jenkins/minikube-integration/15452-3381/.minikube/certs/10168.pem (1338 bytes)
	W0711 00:49:09.529101  179066 certs.go:433] ignoring /home/jenkins/minikube-integration/15452-3381/.minikube/certs/home/jenkins/minikube-integration/15452-3381/.minikube/certs/10168_empty.pem, impossibly tiny 0 bytes
	I0711 00:49:09.529112  179066 certs.go:437] found cert: /home/jenkins/minikube-integration/15452-3381/.minikube/certs/home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca-key.pem (1679 bytes)
	I0711 00:49:09.529139  179066 certs.go:437] found cert: /home/jenkins/minikube-integration/15452-3381/.minikube/certs/home/jenkins/minikube-integration/15452-3381/.minikube/certs/ca.pem (1078 bytes)
	I0711 00:49:09.529175  179066 certs.go:437] found cert: /home/jenkins/minikube-integration/15452-3381/.minikube/certs/home/jenkins/minikube-integration/15452-3381/.minikube/certs/cert.pem (1123 bytes)
	I0711 00:49:09.529199  179066 certs.go:437] found cert: /home/jenkins/minikube-integration/15452-3381/.minikube/certs/home/jenkins/minikube-integration/15452-3381/.minikube/certs/key.pem (1679 bytes)
	I0711 00:49:09.529241  179066 certs.go:437] found cert: /home/jenkins/minikube-integration/15452-3381/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15452-3381/.minikube/files/etc/ssl/certs/101682.pem (1708 bytes)
	I0711 00:49:09.529268  179066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15452-3381/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0711 00:49:09.529281  179066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15452-3381/.minikube/certs/10168.pem -> /usr/share/ca-certificates/10168.pem
	I0711 00:49:09.529294  179066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15452-3381/.minikube/files/etc/ssl/certs/101682.pem -> /usr/share/ca-certificates/101682.pem
	I0711 00:49:09.530008  179066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/force-systemd-flag-255349/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0711 00:49:09.557921  179066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/force-systemd-flag-255349/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0711 00:49:09.584040  179066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/force-systemd-flag-255349/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0711 00:49:09.605707  179066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/force-systemd-flag-255349/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0711 00:49:09.629322  179066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0711 00:49:09.655776  179066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0711 00:49:09.677476  179066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0711 00:49:09.702192  179066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0711 00:49:09.725043  179066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0711 00:49:09.751942  179066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/certs/10168.pem --> /usr/share/ca-certificates/10168.pem (1338 bytes)
	I0711 00:49:09.777349  179066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-3381/.minikube/files/etc/ssl/certs/101682.pem --> /usr/share/ca-certificates/101682.pem (1708 bytes)
	I0711 00:49:09.799191  179066 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0711 00:49:09.817661  179066 ssh_runner.go:195] Run: openssl version
	I0711 00:49:09.824382  179066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0711 00:49:09.834884  179066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0711 00:49:09.838528  179066 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 11 00:19 /usr/share/ca-certificates/minikubeCA.pem
	I0711 00:49:09.838574  179066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0711 00:49:09.844840  179066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0711 00:49:09.853772  179066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10168.pem && ln -fs /usr/share/ca-certificates/10168.pem /etc/ssl/certs/10168.pem"
	I0711 00:49:09.862721  179066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10168.pem
	I0711 00:49:09.865767  179066 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 11 00:24 /usr/share/ca-certificates/10168.pem
	I0711 00:49:09.865820  179066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10168.pem
	I0711 00:49:09.872437  179066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10168.pem /etc/ssl/certs/51391683.0"
	I0711 00:49:09.881193  179066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101682.pem && ln -fs /usr/share/ca-certificates/101682.pem /etc/ssl/certs/101682.pem"
	I0711 00:49:09.889341  179066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101682.pem
	I0711 00:49:09.892300  179066 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 11 00:24 /usr/share/ca-certificates/101682.pem
	I0711 00:49:09.892348  179066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101682.pem
	I0711 00:49:09.898367  179066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101682.pem /etc/ssl/certs/3ec20f2e.0"
	I0711 00:49:09.906710  179066 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0711 00:49:09.909882  179066 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0711 00:49:09.909949  179066 kubeadm.go:404] StartCluster: {Name:force-systemd-flag-255349 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1689032083-15452@sha256:41e03f55414b4bc4a9169ee03de8460ddd2a95f539efd83fce689159a4e20667 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-flag-255349 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.112.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0711 00:49:09.910061  179066 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0711 00:49:09.910096  179066 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0711 00:49:09.945410  179066 cri.go:89] found id: ""
	I0711 00:49:09.945504  179066 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0711 00:49:09.955175  179066 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0711 00:49:09.963492  179066 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0711 00:49:09.963538  179066 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0711 00:49:09.971368  179066 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0711 00:49:09.971411  179066 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0711 00:49:10.015479  179066 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0711 00:49:10.015848  179066 kubeadm.go:322] [preflight] Running pre-flight checks
	I0711 00:49:10.052668  179066 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0711 00:49:10.052788  179066 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1037-gcp
	I0711 00:49:10.052878  179066 kubeadm.go:322] OS: Linux
	I0711 00:49:10.052946  179066 kubeadm.go:322] CGROUPS_CPU: enabled
	I0711 00:49:10.053010  179066 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0711 00:49:10.053083  179066 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0711 00:49:10.053151  179066 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0711 00:49:10.053235  179066 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0711 00:49:10.053302  179066 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0711 00:49:10.053369  179066 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0711 00:49:10.053441  179066 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0711 00:49:10.053511  179066 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0711 00:49:10.123695  179066 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0711 00:49:10.123884  179066 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0711 00:49:10.124009  179066 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0711 00:49:10.344689  179066 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0711 00:49:10.198248  177516 out.go:204]   - Configuring RBAC rules ...
	I0711 00:49:10.198405  177516 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0711 00:49:10.202842  177516 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0711 00:49:10.209408  177516 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0711 00:49:10.212385  177516 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0711 00:49:10.216431  177516 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0711 00:49:10.219078  177516 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0711 00:49:10.232336  177516 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0711 00:49:10.455976  177516 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0711 00:49:10.613125  177516 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0711 00:49:10.613930  177516 kubeadm.go:322] 
	I0711 00:49:10.614019  177516 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0711 00:49:10.614025  177516 kubeadm.go:322] 
	I0711 00:49:10.614113  177516 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0711 00:49:10.614116  177516 kubeadm.go:322] 
	I0711 00:49:10.614137  177516 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0711 00:49:10.614205  177516 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0711 00:49:10.614246  177516 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0711 00:49:10.614249  177516 kubeadm.go:322] 
	I0711 00:49:10.614351  177516 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0711 00:49:10.614355  177516 kubeadm.go:322] 
	I0711 00:49:10.614402  177516 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0711 00:49:10.614407  177516 kubeadm.go:322] 
	I0711 00:49:10.614451  177516 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0711 00:49:10.614531  177516 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0711 00:49:10.614605  177516 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0711 00:49:10.614611  177516 kubeadm.go:322] 
	I0711 00:49:10.614698  177516 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0711 00:49:10.614784  177516 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0711 00:49:10.614789  177516 kubeadm.go:322] 
	I0711 00:49:10.614876  177516 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token q7zqhd.k7w9wu3vxpc8ohq9 \
	I0711 00:49:10.614983  177516 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:97bebf85545463e28e8c1fd71d5fccfa7beefe672ed16176ffafb3a239a4fc4f \
	I0711 00:49:10.615005  177516 kubeadm.go:322] 	--control-plane 
	I0711 00:49:10.615009  177516 kubeadm.go:322] 
	I0711 00:49:10.615104  177516 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0711 00:49:10.615110  177516 kubeadm.go:322] 
	I0711 00:49:10.615175  177516 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token q7zqhd.k7w9wu3vxpc8ohq9 \
	I0711 00:49:10.615268  177516 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:97bebf85545463e28e8c1fd71d5fccfa7beefe672ed16176ffafb3a239a4fc4f 
	I0711 00:49:10.619871  177516 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1037-gcp\n", err: exit status 1
	I0711 00:49:10.619982  177516 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0711 00:49:10.620003  177516 cni.go:84] Creating CNI manager for ""
	I0711 00:49:10.620012  177516 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0711 00:49:10.622002  177516 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0711 00:49:08.920000  170338 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0711 00:49:08.920037  170338 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0711 00:49:10.623369  177516 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0711 00:49:10.674023  177516 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0711 00:49:10.674037  177516 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0711 00:49:10.696310  177516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0711 00:49:11.479414  177516 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0711 00:49:11.479531  177516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=72491f7d3796d9f0aa01d4c526b07206f092e604 minikube.k8s.io/name=cert-expiration-867467 minikube.k8s.io/updated_at=2023_07_11T00_49_11_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0711 00:49:11.479533  177516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0711 00:49:11.583554  177516 kubeadm.go:1081] duration metric: took 104.081047ms to wait for elevateKubeSystemPrivileges.
	I0711 00:49:11.583638  177516 ops.go:34] apiserver oom_adj: -16
	I0711 00:49:11.604781  177516 kubeadm.go:406] StartCluster complete in 10.857102655s
	I0711 00:49:11.604813  177516 settings.go:142] acquiring lock: {Name:mk292abf46436ce17435480484ca010f83f19dc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0711 00:49:11.604888  177516 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15452-3381/kubeconfig
	I0711 00:49:11.606415  177516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15452-3381/kubeconfig: {Name:mk7a4dda1ca27c23b8e4a4d2dab8f3cedddd8401 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0711 00:49:11.606646  177516 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0711 00:49:11.606830  177516 config.go:182] Loaded profile config "cert-expiration-867467": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0711 00:49:11.606796  177516 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0711 00:49:11.606890  177516 addons.go:66] Setting storage-provisioner=true in profile "cert-expiration-867467"
	I0711 00:49:11.606907  177516 addons.go:228] Setting addon storage-provisioner=true in "cert-expiration-867467"
	I0711 00:49:11.606906  177516 addons.go:66] Setting default-storageclass=true in profile "cert-expiration-867467"
	I0711 00:49:11.606921  177516 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-867467"
	I0711 00:49:11.606963  177516 host.go:66] Checking if "cert-expiration-867467" exists ...
	I0711 00:49:11.607281  177516 cli_runner.go:164] Run: docker container inspect cert-expiration-867467 --format={{.State.Status}}
	I0711 00:49:11.607496  177516 cli_runner.go:164] Run: docker container inspect cert-expiration-867467 --format={{.State.Status}}
	I0711 00:49:11.632156  177516 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0711 00:49:10.348103  179066 out.go:204]   - Generating certificates and keys ...
	I0711 00:49:10.348291  179066 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0711 00:49:10.348410  179066 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0711 00:49:10.550479  179066 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0711 00:49:10.894615  179066 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0711 00:49:11.130535  179066 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0711 00:49:11.239278  179066 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0711 00:49:11.442768  179066 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0711 00:49:11.443013  179066 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-255349 localhost] and IPs [192.168.112.2 127.0.0.1 ::1]
	I0711 00:49:11.610092  179066 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0711 00:49:11.610270  179066 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-255349 localhost] and IPs [192.168.112.2 127.0.0.1 ::1]
	I0711 00:49:11.636538  177516 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0711 00:49:11.636548  177516 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0711 00:49:11.636601  177516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-867467
	I0711 00:49:11.639552  177516 addons.go:228] Setting addon default-storageclass=true in "cert-expiration-867467"
	I0711 00:49:11.639582  177516 host.go:66] Checking if "cert-expiration-867467" exists ...
	I0711 00:49:11.640041  177516 cli_runner.go:164] Run: docker container inspect cert-expiration-867467 --format={{.State.Status}}
	I0711 00:49:11.657099  177516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32982 SSHKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/cert-expiration-867467/id_rsa Username:docker}
	I0711 00:49:11.659122  177516 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0711 00:49:11.659130  177516 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0711 00:49:11.659167  177516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-867467
	I0711 00:49:11.679525  177516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32982 SSHKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/cert-expiration-867467/id_rsa Username:docker}
	I0711 00:49:11.723185  177516 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0711 00:49:11.790297  177516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0711 00:49:11.790512  177516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0711 00:49:12.176144  177516 kapi.go:248] "coredns" deployment in "kube-system" namespace and "cert-expiration-867467" context rescaled to 1 replicas
	I0711 00:49:12.176205  177516 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0711 00:49:12.179409  177516 out.go:177] * Verifying Kubernetes components...
	I0711 00:49:12.181008  177516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0711 00:49:12.310117  177516 start.go:901] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0711 00:49:12.536441  177516 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0711 00:49:12.538428  177516 addons.go:499] enable addons completed in 931.631191ms: enabled=[storage-provisioner default-storageclass]
	I0711 00:49:12.535781  177516 api_server.go:52] waiting for apiserver process to appear ...
	I0711 00:49:12.538516  177516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0711 00:49:12.550235  177516 api_server.go:72] duration metric: took 373.979143ms to wait for apiserver process to appear ...
	I0711 00:49:12.550252  177516 api_server.go:88] waiting for apiserver healthz status ...
	I0711 00:49:12.550268  177516 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0711 00:49:12.555323  177516 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0711 00:49:12.556329  177516 api_server.go:141] control plane version: v1.27.3
	I0711 00:49:12.556346  177516 api_server.go:131] duration metric: took 6.089492ms to wait for apiserver health ...
	I0711 00:49:12.556352  177516 system_pods.go:43] waiting for kube-system pods to appear ...
	I0711 00:49:12.561957  177516 system_pods.go:59] 5 kube-system pods found
	I0711 00:49:12.562004  177516 system_pods.go:61] "etcd-cert-expiration-867467" [ccd3ec96-d13e-4b8c-bb5d-7c5fe71c47d4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0711 00:49:12.562016  177516 system_pods.go:61] "kube-apiserver-cert-expiration-867467" [d3f3d892-dea2-4edf-9a02-6f2c866a10ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0711 00:49:12.562024  177516 system_pods.go:61] "kube-controller-manager-cert-expiration-867467" [49d64f46-a962-4e69-a051-fe168b0bd2d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0711 00:49:12.562033  177516 system_pods.go:61] "kube-scheduler-cert-expiration-867467" [7cd6d6e1-c348-4485-9255-965af98bf576] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0711 00:49:12.562040  177516 system_pods.go:61] "storage-provisioner" [39a82179-49e7-4616-b409-e363c4a94ed7] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0711 00:49:12.562046  177516 system_pods.go:74] duration metric: took 5.69053ms to wait for pod list to return data ...
	I0711 00:49:12.562055  177516 kubeadm.go:581] duration metric: took 385.807468ms to wait for : map[apiserver:true system_pods:true] ...
	I0711 00:49:12.562067  177516 node_conditions.go:102] verifying NodePressure condition ...
	I0711 00:49:12.564726  177516 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0711 00:49:12.564740  177516 node_conditions.go:123] node cpu capacity is 8
	I0711 00:49:12.564752  177516 node_conditions.go:105] duration metric: took 2.681666ms to run NodePressure ...
	I0711 00:49:12.564765  177516 start.go:228] waiting for startup goroutines ...
	I0711 00:49:12.564773  177516 start.go:233] waiting for cluster config update ...
	I0711 00:49:12.564783  177516 start.go:242] writing updated cluster config ...
	I0711 00:49:12.565184  177516 ssh_runner.go:195] Run: rm -f paused
	I0711 00:49:12.618474  177516 start.go:642] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0711 00:49:12.621050  177516 out.go:177] * Done! kubectl is now configured to use "cert-expiration-867467" cluster and "default" namespace by default
	I0711 00:49:11.844402  179066 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0711 00:49:11.956702  179066 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0711 00:49:12.210987  179066 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0711 00:49:12.211110  179066 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0711 00:49:12.442459  179066 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0711 00:49:12.520787  179066 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0711 00:49:12.688366  179066 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0711 00:49:12.828641  179066 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0711 00:49:12.841508  179066 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0711 00:49:12.843000  179066 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0711 00:49:12.843114  179066 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0711 00:49:12.926316  179066 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0711 00:49:15.203345  164676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0711 00:49:15.234101  164676 out.go:177] 
	W0711 00:49:15.235449  164676 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to start container runtime: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-11T00:49:15Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	
	W0711 00:49:15.235464  164676 out.go:239] * 
	W0711 00:49:15.236240  164676 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0711 00:49:15.238021  164676 out.go:177] 
	
	* 
	* ==> container status <==
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2023-07-11 00:48:25 UTC, end at Tue 2023-07-11 00:49:16 UTC. --
	Jul 11 00:48:28 missing-upgrade-576591 containerd[638]: time="2023-07-11T00:48:28.598690845Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
	Jul 11 00:48:28 missing-upgrade-576591 containerd[638]: time="2023-07-11T00:48:28.598764043Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 11 00:48:28 missing-upgrade-576591 containerd[638]: time="2023-07-11T00:48:28.598835976Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 11 00:48:28 missing-upgrade-576591 containerd[638]: time="2023-07-11T00:48:28.598916766Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 11 00:48:28 missing-upgrade-576591 containerd[638]: time="2023-07-11T00:48:28.599040308Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 11 00:48:28 missing-upgrade-576591 containerd[638]: time="2023-07-11T00:48:28.599153016Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 11 00:48:28 missing-upgrade-576591 containerd[638]: time="2023-07-11T00:48:28.600039636Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 11 00:48:28 missing-upgrade-576591 containerd[638]: time="2023-07-11T00:48:28.600141345Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 11 00:48:28 missing-upgrade-576591 containerd[638]: time="2023-07-11T00:48:28.600243459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 11 00:48:28 missing-upgrade-576591 containerd[638]: time="2023-07-11T00:48:28.600282229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 11 00:48:28 missing-upgrade-576591 containerd[638]: time="2023-07-11T00:48:28.600307407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 11 00:48:28 missing-upgrade-576591 containerd[638]: time="2023-07-11T00:48:28.600328490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 11 00:48:28 missing-upgrade-576591 containerd[638]: time="2023-07-11T00:48:28.600359646Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 11 00:48:28 missing-upgrade-576591 containerd[638]: time="2023-07-11T00:48:28.600383337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 11 00:48:28 missing-upgrade-576591 containerd[638]: time="2023-07-11T00:48:28.600414896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 11 00:48:28 missing-upgrade-576591 containerd[638]: time="2023-07-11T00:48:28.600433509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 11 00:48:28 missing-upgrade-576591 containerd[638]: time="2023-07-11T00:48:28.600450494Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 11 00:48:28 missing-upgrade-576591 containerd[638]: time="2023-07-11T00:48:28.600532014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 11 00:48:28 missing-upgrade-576591 containerd[638]: time="2023-07-11T00:48:28.600553582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 11 00:48:28 missing-upgrade-576591 containerd[638]: time="2023-07-11T00:48:28.600599368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 11 00:48:28 missing-upgrade-576591 containerd[638]: time="2023-07-11T00:48:28.600619272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 11 00:48:28 missing-upgrade-576591 containerd[638]: time="2023-07-11T00:48:28.601076159Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Jul 11 00:48:28 missing-upgrade-576591 containerd[638]: time="2023-07-11T00:48:28.601124243Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Jul 11 00:48:28 missing-upgrade-576591 containerd[638]: time="2023-07-11T00:48:28.601188276Z" level=info msg="containerd successfully booted in 0.056653s"
	Jul 11 00:48:28 missing-upgrade-576591 systemd[1]: Started containerd container runtime.
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000007] ll header: 00000000: 02 42 01 af cb 50 02 42 c0 a8 3a 02 08 00
	[  +4.191602] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-622643b4f5b5
	[  +0.000030] ll header: 00000000: 02 42 01 af cb 50 02 42 c0 a8 3a 02 08 00
	[  +8.191109] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-622643b4f5b5
	[  +0.000005] ll header: 00000000: 02 42 01 af cb 50 02 42 c0 a8 3a 02 08 00
	[Jul11 00:40] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-622643b4f5b5
	[  +0.000013] ll header: 00000000: 02 42 01 af cb 50 02 42 c0 a8 3a 02 08 00
	[  +1.001624] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-622643b4f5b5
	[  +0.000006] ll header: 00000000: 02 42 01 af cb 50 02 42 c0 a8 3a 02 08 00
	[  +2.015764] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-622643b4f5b5
	[  +0.000019] ll header: 00000000: 02 42 01 af cb 50 02 42 c0 a8 3a 02 08 00
	[  +4.159624] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-622643b4f5b5
	[  +0.000022] ll header: 00000000: 02 42 01 af cb 50 02 42 c0 a8 3a 02 08 00
	[  +8.191186] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-622643b4f5b5
	[  +0.000006] ll header: 00000000: 02 42 01 af cb 50 02 42 c0 a8 3a 02 08 00
	[Jul11 00:43] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5bf06dd15f46
	[  +0.000005] ll header: 00000000: 02 42 86 3c ae eb 02 42 c0 a8 43 02 08 00
	[  +1.026929] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5bf06dd15f46
	[  +0.000008] ll header: 00000000: 02 42 86 3c ae eb 02 42 c0 a8 43 02 08 00
	[  +2.015842] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5bf06dd15f46
	[  +0.000029] ll header: 00000000: 02 42 86 3c ae eb 02 42 c0 a8 43 02 08 00
	[  +4.067573] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5bf06dd15f46
	[  +0.000008] ll header: 00000000: 02 42 86 3c ae eb 02 42 c0 a8 43 02 08 00
	[  +8.187136] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5bf06dd15f46
	[  +0.000006] ll header: 00000000: 02 42 86 3c ae eb 02 42 c0 a8 43 02 08 00
	
	* 
	* ==> kernel <==
	*  00:49:16 up 31 min,  0 users,  load average: 9.22, 4.89, 2.33
	Linux missing-upgrade-576591 5.15.0-1037-gcp #45~20.04.1-Ubuntu SMP Thu Jun 22 08:31:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2023-07-11 00:48:25 UTC, end at Tue 2023-07-11 00:49:16 UTC. --
	-- No entries --
	
	

-- /stdout --
** stderr ** 
	E0711 00:49:15.889507  183573 logs.go:281] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-11T00:49:15Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0711 00:49:15.917724  183573 logs.go:281] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-11T00:49:15Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0711 00:49:15.943983  183573 logs.go:281] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-11T00:49:15Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0711 00:49:15.971932  183573 logs.go:281] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-11T00:49:15Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0711 00:49:15.998689  183573 logs.go:281] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-11T00:49:15Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0711 00:49:16.029512  183573 logs.go:281] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-11T00:49:16Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0711 00:49:16.055556  183573 logs.go:281] Failed to list containers for "kindnet": crictl list: sudo crictl ps -a --quiet --name=kindnet: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-11T00:49:16Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0711 00:49:16.083655  183573 logs.go:281] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-11T00:49:16Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0711 00:49:16.178670  183573 logs.go:195] command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-11T00:49:16Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	 output: "\n** stderr ** \ntime=\"2023-07-11T00:49:16Z\" level=fatal msg=\"listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService\"\nCannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?\n\n** /stderr **"
	E0711 00:49:16.279925  183573 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: container status, describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p missing-upgrade-576591 -n missing-upgrade-576591
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p missing-upgrade-576591 -n missing-upgrade-576591: exit status 2 (297.029404ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "missing-upgrade-576591" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "missing-upgrade-576591" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-576591
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-576591: (2.285003263s)
--- FAIL: TestMissingContainerUpgrade (141.82s)


Test pass (279/304)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 6.04
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.06
10 TestDownloadOnly/v1.27.3/json-events 4.02
11 TestDownloadOnly/v1.27.3/preload-exists 0
15 TestDownloadOnly/v1.27.3/LogsDuration 0.05
16 TestDownloadOnly/DeleteAll 0.21
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.13
18 TestDownloadOnlyKic 1.22
19 TestBinaryMirror 0.73
20 TestOffline 71.33
22 TestAddons/Setup 110.71
24 TestAddons/parallel/Registry 13.35
25 TestAddons/parallel/Ingress 20.09
26 TestAddons/parallel/InspektorGadget 10.5
27 TestAddons/parallel/MetricsServer 5.47
28 TestAddons/parallel/HelmTiller 9.96
30 TestAddons/parallel/CSI 68.92
31 TestAddons/parallel/Headlamp 10.11
32 TestAddons/parallel/CloudSpanner 5.32
35 TestAddons/serial/GCPAuth/Namespaces 0.13
36 TestAddons/StoppedEnableDisable 12.09
37 TestCertOptions 26.63
38 TestCertExpiration 226.05
40 TestForceSystemdFlag 29.5
41 TestForceSystemdEnv 42.95
43 TestKVMDriverInstallOrUpdate 2.9
47 TestErrorSpam/setup 21.29
48 TestErrorSpam/start 0.58
49 TestErrorSpam/status 0.87
50 TestErrorSpam/pause 1.5
51 TestErrorSpam/unpause 1.52
52 TestErrorSpam/stop 1.36
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 48.77
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 13.2
59 TestFunctional/serial/KubeContext 0.04
60 TestFunctional/serial/KubectlGetPods 0.07
63 TestFunctional/serial/CacheCmd/cache/add_remote 3.06
64 TestFunctional/serial/CacheCmd/cache/add_local 1.38
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.04
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
68 TestFunctional/serial/CacheCmd/cache/cache_reload 1.86
69 TestFunctional/serial/CacheCmd/cache/delete 0.08
70 TestFunctional/serial/MinikubeKubectlCmd 0.1
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
72 TestFunctional/serial/ExtraConfig 57.2
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 1.42
75 TestFunctional/serial/LogsFileCmd 1.42
76 TestFunctional/serial/InvalidService 4.48
78 TestFunctional/parallel/ConfigCmd 0.37
79 TestFunctional/parallel/DashboardCmd 15.51
80 TestFunctional/parallel/DryRun 0.5
81 TestFunctional/parallel/InternationalLanguage 0.2
82 TestFunctional/parallel/StatusCmd 1.02
86 TestFunctional/parallel/ServiceCmdConnect 9.56
87 TestFunctional/parallel/AddonsCmd 0.14
88 TestFunctional/parallel/PersistentVolumeClaim 34.31
90 TestFunctional/parallel/SSHCmd 0.59
91 TestFunctional/parallel/CpCmd 1.28
92 TestFunctional/parallel/MySQL 26.38
93 TestFunctional/parallel/FileSync 0.25
94 TestFunctional/parallel/CertSync 1.76
98 TestFunctional/parallel/NodeLabels 0.07
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.57
102 TestFunctional/parallel/License 0.17
103 TestFunctional/parallel/ServiceCmd/DeployApp 8.26
105 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.46
106 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
108 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.38
109 TestFunctional/parallel/ServiceCmd/List 0.55
110 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
111 TestFunctional/parallel/ServiceCmd/HTTPS 0.44
112 TestFunctional/parallel/ServiceCmd/Format 0.37
113 TestFunctional/parallel/ServiceCmd/URL 0.36
114 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
115 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
119 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
120 TestFunctional/parallel/Version/short 0.05
121 TestFunctional/parallel/Version/components 0.96
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
126 TestFunctional/parallel/ImageCommands/ImageBuild 2.87
127 TestFunctional/parallel/ImageCommands/Setup 0.94
128 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
129 TestFunctional/parallel/ProfileCmd/profile_list 0.36
130 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.11
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
132 TestFunctional/parallel/MountCmd/any-port 7.24
133 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
134 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
135 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 5.74
137 TestFunctional/parallel/MountCmd/specific-port 2.27
138 TestFunctional/parallel/MountCmd/VerifyCleanup 1.16
139 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.1
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.99
141 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
142 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.92
143 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.82
144 TestFunctional/delete_addon-resizer_images 0.08
145 TestFunctional/delete_my-image_image 0.01
146 TestFunctional/delete_minikube_cached_images 0.02
150 TestIngressAddonLegacy/StartLegacyK8sCluster 71.28
152 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 8.69
153 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.36
154 TestIngressAddonLegacy/serial/ValidateIngressAddons 39.69
157 TestJSONOutput/start/Command 80.52
158 TestJSONOutput/start/Audit 0
160 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/pause/Command 0.66
164 TestJSONOutput/pause/Audit 0
166 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/unpause/Command 0.61
170 TestJSONOutput/unpause/Audit 0
172 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/stop/Command 5.74
176 TestJSONOutput/stop/Audit 0
178 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
180 TestErrorJSONOutput 0.2
182 TestKicCustomNetwork/create_custom_network 33.92
183 TestKicCustomNetwork/use_default_bridge_network 24.48
184 TestKicExistingNetwork 26.06
185 TestKicCustomSubnet 25.85
186 TestKicStaticIP 26.52
187 TestMainNoArgs 0.05
188 TestMinikubeProfile 49.77
191 TestMountStart/serial/StartWithMountFirst 4.86
192 TestMountStart/serial/VerifyMountFirst 0.25
193 TestMountStart/serial/StartWithMountSecond 4.97
194 TestMountStart/serial/VerifyMountSecond 0.23
195 TestMountStart/serial/DeleteFirst 1.63
196 TestMountStart/serial/VerifyMountPostDelete 0.24
197 TestMountStart/serial/Stop 1.2
198 TestMountStart/serial/RestartStopped 6.68
199 TestMountStart/serial/VerifyMountPostStop 0.25
202 TestMultiNode/serial/FreshStart2Nodes 105.71
203 TestMultiNode/serial/DeployApp2Nodes 29.49
204 TestMultiNode/serial/PingHostFrom2Pods 0.81
205 TestMultiNode/serial/AddNode 14.96
206 TestMultiNode/serial/ProfileList 0.28
207 TestMultiNode/serial/CopyFile 8.95
208 TestMultiNode/serial/StopNode 2.13
209 TestMultiNode/serial/StartAfterStop 10.44
210 TestMultiNode/serial/RestartKeepsNodes 132.26
211 TestMultiNode/serial/DeleteNode 4.72
212 TestMultiNode/serial/StopMultiNode 23.91
213 TestMultiNode/serial/RestartMultiNode 88.43
214 TestMultiNode/serial/ValidateNameConflict 26.21
219 TestPreload 154.43
221 TestScheduledStopUnix 96.89
224 TestInsufficientStorage 12.94
225 TestRunningBinaryUpgrade 79.26
227 TestKubernetesUpgrade 376.77
230 TestStoppedBinaryUpgrade/Setup 0.72
231 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
232 TestNoKubernetes/serial/StartWithK8s 40.76
233 TestStoppedBinaryUpgrade/Upgrade 137.45
234 TestNoKubernetes/serial/StartWithStopK8s 20.72
235 TestNoKubernetes/serial/Start 7.49
236 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
237 TestNoKubernetes/serial/ProfileList 3.17
238 TestNoKubernetes/serial/Stop 2.39
239 TestNoKubernetes/serial/StartNoArgs 6.29
240 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
248 TestStoppedBinaryUpgrade/MinikubeLogs 1.08
256 TestNetworkPlugins/group/false 4.01
261 TestPause/serial/Start 57.8
263 TestStartStop/group/old-k8s-version/serial/FirstStart 114.8
264 TestPause/serial/SecondStartNoReconfiguration 15.81
265 TestPause/serial/Pause 0.69
266 TestPause/serial/VerifyStatus 0.31
267 TestPause/serial/Unpause 0.64
268 TestPause/serial/PauseAgain 0.83
269 TestPause/serial/DeletePaused 2.54
270 TestPause/serial/VerifyDeletedResources 0.61
272 TestStartStop/group/no-preload/serial/FirstStart 57.67
273 TestStartStop/group/old-k8s-version/serial/DeployApp 8.44
274 TestStartStop/group/no-preload/serial/DeployApp 8.46
275 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.66
276 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1
277 TestStartStop/group/old-k8s-version/serial/Stop 12.13
278 TestStartStop/group/no-preload/serial/Stop 12.1
279 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
280 TestStartStop/group/old-k8s-version/serial/SecondStart 658.28
281 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
282 TestStartStop/group/no-preload/serial/SecondStart 330.3
284 TestStartStop/group/embed-certs/serial/FirstStart 55.06
285 TestStartStop/group/embed-certs/serial/DeployApp 7.43
286 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.84
287 TestStartStop/group/embed-certs/serial/Stop 13.21
289 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 73.57
290 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
291 TestStartStop/group/embed-certs/serial/SecondStart 316.53
292 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.42
293 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.78
294 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.95
295 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.14
296 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 583.49
297 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 9.02
298 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
299 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.33
300 TestStartStop/group/no-preload/serial/Pause 2.8
302 TestStartStop/group/newest-cni/serial/FirstStart 37.05
303 TestStartStop/group/newest-cni/serial/DeployApp 0
304 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.81
305 TestStartStop/group/newest-cni/serial/Stop 1.2
306 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.15
307 TestStartStop/group/newest-cni/serial/SecondStart 37.87
308 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 10.02
309 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
310 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
311 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
312 TestStartStop/group/newest-cni/serial/Pause 2.76
313 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
314 TestNetworkPlugins/group/auto/Start 52.02
315 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.32
316 TestStartStop/group/embed-certs/serial/Pause 3.07
317 TestNetworkPlugins/group/kindnet/Start 50.88
318 TestNetworkPlugins/group/auto/KubeletFlags 0.27
319 TestNetworkPlugins/group/auto/NetCatPod 8.32
320 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
321 TestNetworkPlugins/group/auto/DNS 0.17
322 TestNetworkPlugins/group/auto/Localhost 0.15
323 TestNetworkPlugins/group/auto/HairPin 0.15
324 TestNetworkPlugins/group/kindnet/KubeletFlags 0.27
325 TestNetworkPlugins/group/kindnet/NetCatPod 9.32
326 TestNetworkPlugins/group/kindnet/DNS 0.17
327 TestNetworkPlugins/group/kindnet/Localhost 0.13
328 TestNetworkPlugins/group/kindnet/HairPin 0.16
329 TestNetworkPlugins/group/calico/Start 60.67
330 TestNetworkPlugins/group/custom-flannel/Start 55.24
331 TestNetworkPlugins/group/calico/ControllerPod 5.02
332 TestNetworkPlugins/group/calico/KubeletFlags 0.29
333 TestNetworkPlugins/group/calico/NetCatPod 9.35
334 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
335 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.3
336 TestNetworkPlugins/group/calico/DNS 0.15
337 TestNetworkPlugins/group/calico/Localhost 0.14
338 TestNetworkPlugins/group/calico/HairPin 0.13
339 TestNetworkPlugins/group/custom-flannel/DNS 0.15
340 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
341 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
342 TestNetworkPlugins/group/enable-default-cni/Start 79.52
343 TestNetworkPlugins/group/flannel/Start 60.46
344 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
345 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
346 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.3
347 TestStartStop/group/old-k8s-version/serial/Pause 2.74
348 TestNetworkPlugins/group/bridge/Start 42.36
349 TestNetworkPlugins/group/flannel/ControllerPod 5.02
350 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
351 TestNetworkPlugins/group/flannel/NetCatPod 8.32
352 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.33
353 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.39
354 TestNetworkPlugins/group/flannel/DNS 0.25
355 TestNetworkPlugins/group/flannel/Localhost 0.19
356 TestNetworkPlugins/group/flannel/HairPin 0.19
357 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
358 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
359 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
360 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
361 TestNetworkPlugins/group/bridge/NetCatPod 9.34
362 TestNetworkPlugins/group/bridge/DNS 0.14
363 TestNetworkPlugins/group/bridge/Localhost 0.13
364 TestNetworkPlugins/group/bridge/HairPin 0.13
365 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.02
366 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
367 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
368 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.69

TestDownloadOnly/v1.16.0/json-events (6.04s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-278045 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-278045 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.040975996s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (6.04s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-278045
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-278045: exit status 85 (58.404046ms)
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-278045 | jenkins | v1.30.1 | 11 Jul 23 00:19 UTC |          |
	|         | -p download-only-278045        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/11 00:19:15
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0711 00:19:15.603418   10179 out.go:296] Setting OutFile to fd 1 ...
	I0711 00:19:15.603554   10179 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0711 00:19:15.603565   10179 out.go:309] Setting ErrFile to fd 2...
	I0711 00:19:15.603572   10179 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0711 00:19:15.603683   10179 root.go:337] Updating PATH: /home/jenkins/minikube-integration/15452-3381/.minikube/bin
	W0711 00:19:15.603824   10179 root.go:313] Error reading config file at /home/jenkins/minikube-integration/15452-3381/.minikube/config/config.json: open /home/jenkins/minikube-integration/15452-3381/.minikube/config/config.json: no such file or directory
	I0711 00:19:15.604386   10179 out.go:303] Setting JSON to true
	I0711 00:19:15.605202   10179 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":108,"bootTime":1689034648,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0711 00:19:15.605261   10179 start.go:137] virtualization: kvm guest
	I0711 00:19:15.608212   10179 out.go:97] [download-only-278045] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0711 00:19:15.610287   10179 out.go:169] MINIKUBE_LOCATION=15452
	W0711 00:19:15.608326   10179 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/15452-3381/.minikube/cache/preloaded-tarball: no such file or directory
	I0711 00:19:15.608366   10179 notify.go:220] Checking for updates...
	I0711 00:19:15.613538   10179 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0711 00:19:15.615214   10179 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15452-3381/kubeconfig
	I0711 00:19:15.616588   10179 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15452-3381/.minikube
	I0711 00:19:15.618943   10179 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0711 00:19:15.621362   10179 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0711 00:19:15.621575   10179 driver.go:373] Setting default libvirt URI to qemu:///system
	I0711 00:19:15.642994   10179 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0711 00:19:15.643068   10179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0711 00:19:15.996795   10179 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2023-07-11 00:19:15.988175371 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0711 00:19:15.996894   10179 docker.go:294] overlay module found
	I0711 00:19:15.998769   10179 out.go:97] Using the docker driver based on user configuration
	I0711 00:19:15.998788   10179 start.go:297] selected driver: docker
	I0711 00:19:15.998792   10179 start.go:944] validating driver "docker" against <nil>
	I0711 00:19:15.998866   10179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0711 00:19:16.053662   10179 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2023-07-11 00:19:16.044085925 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0711 00:19:16.053920   10179 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0711 00:19:16.054587   10179 start_flags.go:382] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0711 00:19:16.054778   10179 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0711 00:19:16.057568   10179 out.go:169] Using Docker driver with root privileges
	I0711 00:19:16.059160   10179 cni.go:84] Creating CNI manager for ""
	I0711 00:19:16.059193   10179 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0711 00:19:16.059203   10179 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0711 00:19:16.059217   10179 start_flags.go:319] config:
	{Name:download-only-278045 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1689032083-15452@sha256:41e03f55414b4bc4a9169ee03de8460ddd2a95f539efd83fce689159a4e20667 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-278045 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0711 00:19:16.060992   10179 out.go:97] Starting control plane node download-only-278045 in cluster download-only-278045
	I0711 00:19:16.061012   10179 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0711 00:19:16.062496   10179 out.go:97] Pulling base image ...
	I0711 00:19:16.062528   10179 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0711 00:19:16.062623   10179 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1689032083-15452@sha256:41e03f55414b4bc4a9169ee03de8460ddd2a95f539efd83fce689159a4e20667 in local docker daemon
	I0711 00:19:16.081045   10179 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1689032083-15452@sha256:41e03f55414b4bc4a9169ee03de8460ddd2a95f539efd83fce689159a4e20667 to local cache
	I0711 00:19:16.081211   10179 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1689032083-15452@sha256:41e03f55414b4bc4a9169ee03de8460ddd2a95f539efd83fce689159a4e20667 in local cache directory
	I0711 00:19:16.081296   10179 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1689032083-15452@sha256:41e03f55414b4bc4a9169ee03de8460ddd2a95f539efd83fce689159a4e20667 to local cache
	I0711 00:19:16.109045   10179 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0711 00:19:16.109070   10179 cache.go:57] Caching tarball of preloaded images
	I0711 00:19:16.109223   10179 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0711 00:19:16.111939   10179 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0711 00:19:16.111958   10179 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0711 00:19:16.139494   10179 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/15452-3381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0711 00:19:18.990490   10179 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1689032083-15452@sha256:41e03f55414b4bc4a9169ee03de8460ddd2a95f539efd83fce689159a4e20667 as a tarball
	I0711 00:19:20.259695   10179 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0711 00:19:20.259783   10179 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15452-3381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-278045"
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

TestDownloadOnly/v1.27.3/json-events (4.02s)

=== RUN   TestDownloadOnly/v1.27.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-278045 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-278045 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.018471155s)
--- PASS: TestDownloadOnly/v1.27.3/json-events (4.02s)

TestDownloadOnly/v1.27.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.27.3/preload-exists
--- PASS: TestDownloadOnly/v1.27.3/preload-exists (0.00s)

TestDownloadOnly/v1.27.3/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.27.3/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-278045
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-278045: exit status 85 (52.213483ms)
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-278045 | jenkins | v1.30.1 | 11 Jul 23 00:19 UTC |          |
	|         | -p download-only-278045        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-278045 | jenkins | v1.30.1 | 11 Jul 23 00:19 UTC |          |
	|         | -p download-only-278045        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/11 00:19:21
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0711 00:19:21.715833   10335 out.go:296] Setting OutFile to fd 1 ...
	I0711 00:19:21.716016   10335 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0711 00:19:21.716025   10335 out.go:309] Setting ErrFile to fd 2...
	I0711 00:19:21.716030   10335 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0711 00:19:21.716170   10335 root.go:337] Updating PATH: /home/jenkins/minikube-integration/15452-3381/.minikube/bin
	W0711 00:19:21.716323   10335 root.go:313] Error reading config file at /home/jenkins/minikube-integration/15452-3381/.minikube/config/config.json: open /home/jenkins/minikube-integration/15452-3381/.minikube/config/config.json: no such file or directory
	I0711 00:19:21.716889   10335 out.go:303] Setting JSON to true
	I0711 00:19:21.717809   10335 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":114,"bootTime":1689034648,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0711 00:19:21.717879   10335 start.go:137] virtualization: kvm guest
	I0711 00:19:21.721398   10335 out.go:97] [download-only-278045] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0711 00:19:21.723653   10335 out.go:169] MINIKUBE_LOCATION=15452
	I0711 00:19:21.721526   10335 notify.go:220] Checking for updates...
	I0711 00:19:21.728713   10335 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0711 00:19:21.730729   10335 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15452-3381/kubeconfig
	I0711 00:19:21.732017   10335 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15452-3381/.minikube
	I0711 00:19:21.733380   10335 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-278045"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.3/LogsDuration (0.05s)

TestDownloadOnly/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.21s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-278045
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (1.22s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-595423 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-595423" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-595423
--- PASS: TestDownloadOnlyKic (1.22s)

TestBinaryMirror (0.73s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-294553 --alsologtostderr --binary-mirror http://127.0.0.1:45965 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-294553" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-294553
--- PASS: TestBinaryMirror (0.73s)

TestOffline (71.33s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-476418 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-476418 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd: (1m9.048923827s)
helpers_test.go:175: Cleaning up "offline-containerd-476418" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-476418
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-476418: (2.278091019s)
--- PASS: TestOffline (71.33s)

TestAddons/Setup (110.71s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-906872 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-906872 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (1m50.71372636s)
--- PASS: TestAddons/Setup (110.71s)

TestAddons/parallel/Registry (13.35s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 16.325975ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-cwd5p" [3b4a25f9-7306-4b77-ac89-72d4e8df7d93] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008793049s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-7gwcn" [2957d019-65d9-409c-81d1-986eb7886128] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.010142005s
addons_test.go:316: (dbg) Run:  kubectl --context addons-906872 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-906872 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-906872 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.545200056s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-amd64 -p addons-906872 ip
2023/07/11 00:21:31 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p addons-906872 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.35s)

TestAddons/parallel/Ingress (20.09s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-906872 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-906872 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-906872 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [25de70a8-67d0-4854-ae2c-c3f2a3c92ad4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [25de70a8-67d0-4854-ae2c-c3f2a3c92ad4] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.008673209s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p addons-906872 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context addons-906872 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p addons-906872 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p addons-906872 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p addons-906872 addons disable ingress-dns --alsologtostderr -v=1: (1.609542749s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p addons-906872 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p addons-906872 addons disable ingress --alsologtostderr -v=1: (7.457895623s)
--- PASS: TestAddons/parallel/Ingress (20.09s)

TestAddons/parallel/InspektorGadget (10.5s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-5nkrt" [2c783742-125b-4470-ab3e-66d31ec1b2e6] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.060200644s
addons_test.go:817: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-906872
addons_test.go:817: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-906872: (5.434689537s)
--- PASS: TestAddons/parallel/InspektorGadget (10.50s)

TestAddons/parallel/MetricsServer (5.47s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 4.11164ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-844d8db974-gt4c2" [8a655146-a4a1-4578-aaa2-7cca6eb42b45] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.010464703s
addons_test.go:391: (dbg) Run:  kubectl --context addons-906872 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p addons-906872 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.47s)

TestAddons/parallel/HelmTiller (9.96s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 15.472866ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6847666dc-4fxlm" [c42ac06e-c6be-4c5a-b787-780c4936f98d] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.009478974s
addons_test.go:449: (dbg) Run:  kubectl --context addons-906872 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-906872 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.645307179s)
addons_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p addons-906872 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.96s)

TestAddons/parallel/CSI (68.92s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 7.505381ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-906872 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-906872 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [01c56d60-e490-4f4f-af19-12d0db9f35ac] Pending
helpers_test.go:344: "task-pv-pod" [01c56d60-e490-4f4f-af19-12d0db9f35ac] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [01c56d60-e490-4f4f-af19-12d0db9f35ac] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.007895109s
addons_test.go:560: (dbg) Run:  kubectl --context addons-906872 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-906872 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-906872 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-906872 delete pod task-pv-pod
addons_test.go:570: (dbg) Done: kubectl --context addons-906872 delete pod task-pv-pod: (1.245333703s)
addons_test.go:576: (dbg) Run:  kubectl --context addons-906872 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-906872 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906872 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-906872 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [8419f9d0-bad2-495e-9774-e8f0f1f8493a] Pending
helpers_test.go:344: "task-pv-pod-restore" [8419f9d0-bad2-495e-9774-e8f0f1f8493a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [8419f9d0-bad2-495e-9774-e8f0f1f8493a] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00806925s
addons_test.go:602: (dbg) Run:  kubectl --context addons-906872 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-906872 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-906872 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-amd64 -p addons-906872 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-amd64 -p addons-906872 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.372165185s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-amd64 -p addons-906872 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (68.92s)

TestAddons/parallel/Headlamp (10.11s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-906872 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-906872 --alsologtostderr -v=1: (1.063723331s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-66f6498c69-wkmb9" [94e43cae-28f4-4506-be83-5a3267384bf4] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-66f6498c69-wkmb9" [94e43cae-28f4-4506-be83-5a3267384bf4] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.048934143s
--- PASS: TestAddons/parallel/Headlamp (10.11s)

TestAddons/parallel/CloudSpanner (5.32s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-88647b4cb-hl466" [8e36bc95-f39e-4270-aa94-02131aba8e10] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006357664s
addons_test.go:836: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-906872
--- PASS: TestAddons/parallel/CloudSpanner (5.32s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-906872 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-906872 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/StoppedEnableDisable (12.09s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-906872
addons_test.go:148: (dbg) Done: out/minikube-linux-amd64 stop -p addons-906872: (11.911491546s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-906872
addons_test.go:156: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-906872
addons_test.go:161: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-906872
--- PASS: TestAddons/StoppedEnableDisable (12.09s)

TestCertOptions (26.63s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-966489 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-966489 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (24.001487551s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-966489 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-966489 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-966489 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-966489" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-966489
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-966489: (2.038565964s)
--- PASS: TestCertOptions (26.63s)

TestCertExpiration (226.05s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-867467 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-867467 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (26.366509608s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-867467 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-867467 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (15.25040049s)
helpers_test.go:175: Cleaning up "cert-expiration-867467" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-867467
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-867467: (4.435009754s)
--- PASS: TestCertExpiration (226.05s)

TestForceSystemdFlag (29.5s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-255349 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-255349 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (25.878627074s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-255349 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-255349" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-255349
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-255349: (3.223210851s)
--- PASS: TestForceSystemdFlag (29.50s)

TestForceSystemdEnv (42.95s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-535699 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-535699 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (40.345181626s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-535699 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-535699" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-535699
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-535699: (2.302519419s)
--- PASS: TestForceSystemdEnv (42.95s)

TestKVMDriverInstallOrUpdate (2.9s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (2.90s)

TestErrorSpam/setup (21.29s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-370171 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-370171 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-370171 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-370171 --driver=docker  --container-runtime=containerd: (21.288851694s)
--- PASS: TestErrorSpam/setup (21.29s)

TestErrorSpam/start (0.58s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-370171 --log_dir /tmp/nospam-370171 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-370171 --log_dir /tmp/nospam-370171 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-370171 --log_dir /tmp/nospam-370171 start --dry-run
--- PASS: TestErrorSpam/start (0.58s)

TestErrorSpam/status (0.87s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-370171 --log_dir /tmp/nospam-370171 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-370171 --log_dir /tmp/nospam-370171 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-370171 --log_dir /tmp/nospam-370171 status
--- PASS: TestErrorSpam/status (0.87s)

TestErrorSpam/pause (1.5s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-370171 --log_dir /tmp/nospam-370171 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-370171 --log_dir /tmp/nospam-370171 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-370171 --log_dir /tmp/nospam-370171 pause
--- PASS: TestErrorSpam/pause (1.50s)

TestErrorSpam/unpause (1.52s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-370171 --log_dir /tmp/nospam-370171 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-370171 --log_dir /tmp/nospam-370171 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-370171 --log_dir /tmp/nospam-370171 unpause
--- PASS: TestErrorSpam/unpause (1.52s)

TestErrorSpam/stop (1.36s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-370171 --log_dir /tmp/nospam-370171 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-370171 --log_dir /tmp/nospam-370171 stop: (1.199753823s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-370171 --log_dir /tmp/nospam-370171 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-370171 --log_dir /tmp/nospam-370171 stop
--- PASS: TestErrorSpam/stop (1.36s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/15452-3381/.minikube/files/etc/test/nested/copy/10168/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (48.77s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-403028 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-403028 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (48.766740599s)
--- PASS: TestFunctional/serial/StartWithProxy (48.77s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (13.2s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-403028 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-403028 --alsologtostderr -v=8: (13.201223837s)
functional_test.go:659: soft start took 13.201768281s for "functional-403028" cluster.
--- PASS: TestFunctional/serial/SoftStart (13.20s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-403028 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-403028 cache add registry.k8s.io/pause:3.3: (1.153623791s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.06s)

TestFunctional/serial/CacheCmd/cache/add_local (1.38s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-403028 /tmp/TestFunctionalserialCacheCmdcacheadd_local1430583244/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 cache add minikube-local-cache-test:functional-403028
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-403028 cache add minikube-local-cache-test:functional-403028: (1.05287383s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 cache delete minikube-local-cache-test:functional-403028
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-403028
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.38s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.86s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-403028 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (253.842603ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-403028 cache reload: (1.033366384s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.86s)

TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 kubectl -- --context functional-403028 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-403028 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (57.2s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-403028 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0711 00:26:18.870160   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/addons-906872/client.crt: no such file or directory
E0711 00:26:18.876229   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/addons-906872/client.crt: no such file or directory
E0711 00:26:18.886473   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/addons-906872/client.crt: no such file or directory
E0711 00:26:18.906711   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/addons-906872/client.crt: no such file or directory
E0711 00:26:18.946999   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/addons-906872/client.crt: no such file or directory
E0711 00:26:19.027822   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/addons-906872/client.crt: no such file or directory
E0711 00:26:19.188255   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/addons-906872/client.crt: no such file or directory
E0711 00:26:19.508915   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/addons-906872/client.crt: no such file or directory
E0711 00:26:20.149421   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/addons-906872/client.crt: no such file or directory
E0711 00:26:21.429876   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/addons-906872/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-403028 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (57.19503228s)
functional_test.go:757: restart took 57.195133435s for "functional-403028" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (57.20s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-403028 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.42s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-403028 logs: (1.415697222s)
--- PASS: TestFunctional/serial/LogsCmd (1.42s)

TestFunctional/serial/LogsFileCmd (1.42s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 logs --file /tmp/TestFunctionalserialLogsFileCmd787853159/001/logs.txt
E0711 00:26:23.990240   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/addons-906872/client.crt: no such file or directory
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-403028 logs --file /tmp/TestFunctionalserialLogsFileCmd787853159/001/logs.txt: (1.41768662s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.42s)

TestFunctional/serial/InvalidService (4.48s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-403028 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-403028
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-403028: exit status 115 (333.528579ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31476 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-403028 delete -f testdata/invalidsvc.yaml
E0711 00:26:29.110732   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/addons-906872/client.crt: no such file or directory
--- PASS: TestFunctional/serial/InvalidService (4.48s)

TestFunctional/parallel/ConfigCmd (0.37s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-403028 config get cpus: exit status 14 (78.801468ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-403028 config get cpus: exit status 14 (62.714695ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.37s)

TestFunctional/parallel/DashboardCmd (15.51s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-403028 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-403028 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 50436: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.51s)

TestFunctional/parallel/DryRun (0.5s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-403028 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-403028 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (210.793574ms)

-- stdout --
	* [functional-403028] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15452
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15452-3381/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15452-3381/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0711 00:26:52.326467   50033 out.go:296] Setting OutFile to fd 1 ...
	I0711 00:26:52.326623   50033 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0711 00:26:52.326632   50033 out.go:309] Setting ErrFile to fd 2...
	I0711 00:26:52.326638   50033 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0711 00:26:52.326845   50033 root.go:337] Updating PATH: /home/jenkins/minikube-integration/15452-3381/.minikube/bin
	I0711 00:26:52.327561   50033 out.go:303] Setting JSON to false
	I0711 00:26:52.329096   50033 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":564,"bootTime":1689034648,"procs":523,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0711 00:26:52.329167   50033 start.go:137] virtualization: kvm guest
	I0711 00:26:52.334715   50033 out.go:177] * [functional-403028] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0711 00:26:52.336788   50033 out.go:177]   - MINIKUBE_LOCATION=15452
	I0711 00:26:52.336802   50033 notify.go:220] Checking for updates...
	I0711 00:26:52.338420   50033 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0711 00:26:52.339955   50033 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15452-3381/kubeconfig
	I0711 00:26:52.341513   50033 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15452-3381/.minikube
	I0711 00:26:52.343045   50033 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0711 00:26:52.344480   50033 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0711 00:26:52.347317   50033 config.go:182] Loaded profile config "functional-403028": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0711 00:26:52.348574   50033 driver.go:373] Setting default libvirt URI to qemu:///system
	I0711 00:26:52.385254   50033 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0711 00:26:52.385384   50033 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0711 00:26:52.457289   50033 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:66 SystemTime:2023-07-11 00:26:52.446473152 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0711 00:26:52.457376   50033 docker.go:294] overlay module found
	I0711 00:26:52.460285   50033 out.go:177] * Using the docker driver based on existing profile
	I0711 00:26:52.461573   50033 start.go:297] selected driver: docker
	I0711 00:26:52.461592   50033 start.go:944] validating driver "docker" against &{Name:functional-403028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1689032083-15452@sha256:41e03f55414b4bc4a9169ee03de8460ddd2a95f539efd83fce689159a4e20667 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-403028 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0711 00:26:52.461714   50033 start.go:955] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0711 00:26:52.464156   50033 out.go:177] 
	W0711 00:26:52.465844   50033 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0711 00:26:52.467185   50033 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-403028 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.50s)
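The RSRC_INSUFFICIENT_REQ_MEMORY failure exercised by the dry run above boils down to a floor check on the requested allocation. A minimal Go sketch of that validation, assuming only the 1800MB minimum reported in the log (the constant and function names are illustrative, not minikube's actual code):

```go
package main

import "fmt"

// minUsableMB mirrors the 1800MB floor reported in the log output;
// the name is illustrative, not minikube's.
const minUsableMB = 1800

// validateMemory returns an error in the spirit of
// RSRC_INSUFFICIENT_REQ_MEMORY when the request is below the floor.
func validateMemory(reqMB int) error {
	if reqMB < minUsableMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMB is less than the usable minimum of %dMB", reqMB, minUsableMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250))  // rejected, as in the --memory 250MB dry run
	fmt.Println(validateMemory(4000)) // accepted, matching the profile's Memory:4000
}
```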

TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-403028 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-403028 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (201.479553ms)

-- stdout --
	* [functional-403028] minikube v1.30.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15452
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15452-3381/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15452-3381/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0711 00:26:52.814545   50203 out.go:296] Setting OutFile to fd 1 ...
	I0711 00:26:52.814783   50203 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0711 00:26:52.814798   50203 out.go:309] Setting ErrFile to fd 2...
	I0711 00:26:52.814806   50203 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0711 00:26:52.815030   50203 root.go:337] Updating PATH: /home/jenkins/minikube-integration/15452-3381/.minikube/bin
	I0711 00:26:52.815738   50203 out.go:303] Setting JSON to false
	I0711 00:26:52.817857   50203 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":565,"bootTime":1689034648,"procs":523,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0711 00:26:52.817953   50203 start.go:137] virtualization: kvm guest
	I0711 00:26:52.821774   50203 out.go:177] * [functional-403028] minikube v1.30.1 sur Ubuntu 20.04 (kvm/amd64)
	I0711 00:26:52.823511   50203 out.go:177]   - MINIKUBE_LOCATION=15452
	I0711 00:26:52.823517   50203 notify.go:220] Checking for updates...
	I0711 00:26:52.825142   50203 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0711 00:26:52.826760   50203 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15452-3381/kubeconfig
	I0711 00:26:52.828337   50203 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15452-3381/.minikube
	I0711 00:26:52.832796   50203 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0711 00:26:52.834744   50203 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0711 00:26:52.836575   50203 config.go:182] Loaded profile config "functional-403028": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0711 00:26:52.837155   50203 driver.go:373] Setting default libvirt URI to qemu:///system
	I0711 00:26:52.865508   50203 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0711 00:26:52.865626   50203 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0711 00:26:52.948842   50203 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2023-07-11 00:26:52.934086753 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0711 00:26:52.948976   50203 docker.go:294] overlay module found
	I0711 00:26:52.951779   50203 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0711 00:26:52.953195   50203 start.go:297] selected driver: docker
	I0711 00:26:52.953211   50203 start.go:944] validating driver "docker" against &{Name:functional-403028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1689032083-15452@sha256:41e03f55414b4bc4a9169ee03de8460ddd2a95f539efd83fce689159a4e20667 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-403028 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersi
on:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0711 00:26:52.953295   50203 start.go:955] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0711 00:26:52.955528   50203 out.go:177] 
	W0711 00:26:52.956942   50203 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0711 00:26:52.958550   50203 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)
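The French output above comes from minikube's message localization. As a hedged sketch only (minikube actually loads translations from per-locale JSON files; the map and fallback logic here are invented for illustration), a locale-keyed lookup with an English fallback might look like:

```go
package main

import "fmt"

// translations is an illustrative stand-in for per-locale message
// files; only the driver-selection message from the log is shown.
var translations = map[string]string{
	"en": "Using the docker driver based on existing profile",
	"fr": "Utilisation du pilote docker basé sur le profil existant",
}

// message returns the localized string, falling back to English when
// the locale has no translation.
func message(locale string) string {
	if msg, ok := translations[locale]; ok {
		return msg
	}
	return translations["en"]
}

func main() {
	fmt.Println(message("fr"))
	fmt.Println(message("de")) // no German entry: falls back to English
}
```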

TestFunctional/parallel/StatusCmd (1.02s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.02s)
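The -f flag above takes a Go text/template rendered against the status object. A self-contained sketch of how that exact format string renders (the Status struct is an illustrative stand-in, not minikube's real type; the "kublet:" label is kept verbatim from the log):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Status mirrors the fields the format string references; it is an
// illustrative stand-in, not minikube's real status type.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

// The same template string passed to `minikube status -f` above.
const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"

// render executes the template against a status value.
func render(s Status) string {
	var buf bytes.Buffer
	template.Must(template.New("status").Parse(format)).Execute(&buf, s)
	return buf.String()
}

func main() {
	fmt.Println(render(Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}))
	// → host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured
}
```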

TestFunctional/parallel/ServiceCmdConnect (9.56s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-403028 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-403028 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6fb669fc84-tckx7" [e39868df-194b-4c79-92b3-79b55a180b4f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-6fb669fc84-tckx7" [e39868df-194b-4c79-92b3-79b55a180b4f] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.008901117s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30273
functional_test.go:1674: http://192.168.49.2:30273: success! body:

Hostname: hello-node-connect-6fb669fc84-tckx7

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30273
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.56s)

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (34.31s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [b47bc0ef-12aa-42ef-9cc6-a2d4cf874d2e] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.009299105s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-403028 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-403028 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-403028 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-403028 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [22f78861-e28d-41a1-b394-f5437d312888] Pending
helpers_test.go:344: "sp-pod" [22f78861-e28d-41a1-b394-f5437d312888] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [22f78861-e28d-41a1-b394-f5437d312888] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.009636582s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-403028 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-403028 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-403028 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a62e3592-9bab-4849-8bf8-89515d491de4] Pending
helpers_test.go:344: "sp-pod" [a62e3592-9bab-4849-8bf8-89515d491de4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a62e3592-9bab-4849-8bf8-89515d491de4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.012959723s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-403028 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (34.31s)

TestFunctional/parallel/SSHCmd (0.59s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.59s)

TestFunctional/parallel/CpCmd (1.28s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 ssh -n functional-403028 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 cp functional-403028:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2948412301/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 ssh -n functional-403028 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.28s)

TestFunctional/parallel/MySQL (26.38s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-403028 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-7db894d786-8w72s" [cd5d4985-8819-4104-b5f3-9e2e87267aa7] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-7db894d786-8w72s" [cd5d4985-8819-4104-b5f3-9e2e87267aa7] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.009405034s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-403028 exec mysql-7db894d786-8w72s -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-403028 exec mysql-7db894d786-8w72s -- mysql -ppassword -e "show databases;": exit status 1 (295.568259ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-403028 exec mysql-7db894d786-8w72s -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-403028 exec mysql-7db894d786-8w72s -- mysql -ppassword -e "show databases;": exit status 1 (291.313451ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-403028 exec mysql-7db894d786-8w72s -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-403028 exec mysql-7db894d786-8w72s -- mysql -ppassword -e "show databases;": exit status 1 (156.515783ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-403028 exec mysql-7db894d786-8w72s -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-403028 exec mysql-7db894d786-8w72s -- mysql -ppassword -e "show databases;": exit status 1 (131.332988ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
2023/07/11 00:27:08 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1803: (dbg) Run:  kubectl --context functional-403028 exec mysql-7db894d786-8w72s -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.38s)

TestFunctional/parallel/FileSync (0.25s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/10168/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 ssh "sudo cat /etc/test/nested/copy/10168/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

TestFunctional/parallel/CertSync (1.76s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/10168.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 ssh "sudo cat /etc/ssl/certs/10168.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/10168.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 ssh "sudo cat /usr/share/ca-certificates/10168.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/101682.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 ssh "sudo cat /etc/ssl/certs/101682.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/101682.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 ssh "sudo cat /usr/share/ca-certificates/101682.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.76s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-403028 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
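The --template flag above is again a Go template, this time ranging over the node's label map to print the keys. A self-contained sketch (the sample labels are invented for illustration; text/template visits map keys in sorted order, so the output is deterministic):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Same range body as the kubectl call above, minus the index lookup
// that selects the first node.
const tmpl = "{{range $k, $v := .}}{{$k}} {{end}}"

// labelKeys renders the space-separated key list for a label map.
func labelKeys(labels map[string]string) string {
	var buf bytes.Buffer
	template.Must(template.New("labels").Parse(tmpl)).Execute(&buf, labels)
	return buf.String()
}

func main() {
	labels := map[string]string{
		"kubernetes.io/os":   "linux",
		"kubernetes.io/arch": "amd64",
	}
	fmt.Println(labelKeys(labels)) // keys emitted in sorted order
}
```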

TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-403028 ssh "sudo systemctl is-active docker": exit status 1 (276.724303ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-403028 ssh "sudo systemctl is-active crio": exit status 1 (288.444775ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

TestFunctional/parallel/License (0.17s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.17s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-403028 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-403028 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-775766b4cc-j4xt9" [f39e64bf-f481-4153-a8f6-6b1dbf14ad54] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-775766b4cc-j4xt9" [f39e64bf-f481-4153-a8f6-6b1dbf14ad54] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.045768578s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.26s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.46s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-403028 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-403028 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-403028 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-403028 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 44481: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.46s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-403028 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.38s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-403028 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [37a1e1a6-7984-4106-903b-a19756321124] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [37a1e1a6-7984-4106-903b-a19756321124] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.008101765s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.38s)

TestFunctional/parallel/ServiceCmd/List (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.55s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 service list -o json
functional_test.go:1493: Took "509.964535ms" to run "out/minikube-linux-amd64 -p functional-403028 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 service --namespace=default --https --url hello-node
E0711 00:26:39.351370   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/addons-906872/client.crt: no such file or directory
functional_test.go:1521: found endpoint: https://192.168.49.2:30808
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

TestFunctional/parallel/ServiceCmd/Format (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

TestFunctional/parallel/ServiceCmd/URL (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:30808
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-403028 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.177.181 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-403028 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.96s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.96s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-403028 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.3
registry.k8s.io/kube-proxy:v1.27.3
registry.k8s.io/kube-controller-manager:v1.27.3
registry.k8s.io/kube-apiserver:v1.27.3
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-403028
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-403028
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-403028 image ls --format short --alsologtostderr:
I0711 00:27:05.153062   51912 out.go:296] Setting OutFile to fd 1 ...
I0711 00:27:05.153190   51912 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0711 00:27:05.153200   51912 out.go:309] Setting ErrFile to fd 2...
I0711 00:27:05.153204   51912 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0711 00:27:05.153328   51912 root.go:337] Updating PATH: /home/jenkins/minikube-integration/15452-3381/.minikube/bin
I0711 00:27:05.153905   51912 config.go:182] Loaded profile config "functional-403028": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0711 00:27:05.154064   51912 config.go:182] Loaded profile config "functional-403028": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0711 00:27:05.154491   51912 cli_runner.go:164] Run: docker container inspect functional-403028 --format={{.State.Status}}
I0711 00:27:05.172430   51912 ssh_runner.go:195] Run: systemctl --version
I0711 00:27:05.172471   51912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-403028
I0711 00:27:05.197862   51912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/functional-403028/id_rsa Username:docker}
I0711 00:27:05.331138   51912 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-403028 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-apiserver              | v1.27.3            | sha256:08a0c9 | 33.4MB |
| registry.k8s.io/kube-controller-manager     | v1.27.3            | sha256:7cffc0 | 31MB   |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| docker.io/library/nginx                     | latest             | sha256:021283 | 70.6MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:ead0a4 | 16.2MB |
| registry.k8s.io/etcd                        | 3.5.7-0            | sha256:86b6af | 102MB  |
| registry.k8s.io/kube-proxy                  | v1.27.3            | sha256:578054 | 23.9MB |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| docker.io/library/nginx                     | alpine             | sha256:493752 | 17MB   |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| docker.io/kindest/kindnetd                  | v20230511-dc714da8 | sha256:b0b1fa | 27.7MB |
| docker.io/library/mysql                     | 5.7                | sha256:2be84d | 169MB  |
| gcr.io/google-containers/addon-resizer      | functional-403028  | sha256:ffd4cf | 10.8MB |
| registry.k8s.io/kube-scheduler              | v1.27.3            | sha256:41697c | 18.2MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| docker.io/library/minikube-local-cache-test | functional-403028  | sha256:b7464e | 1.01kB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-403028 image ls --format table --alsologtostderr:
I0711 00:27:05.428505   52001 out.go:296] Setting OutFile to fd 1 ...
I0711 00:27:05.428621   52001 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0711 00:27:05.428631   52001 out.go:309] Setting ErrFile to fd 2...
I0711 00:27:05.428636   52001 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0711 00:27:05.428780   52001 root.go:337] Updating PATH: /home/jenkins/minikube-integration/15452-3381/.minikube/bin
I0711 00:27:05.429329   52001 config.go:182] Loaded profile config "functional-403028": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0711 00:27:05.429420   52001 config.go:182] Loaded profile config "functional-403028": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0711 00:27:05.429792   52001 cli_runner.go:164] Run: docker container inspect functional-403028 --format={{.State.Status}}
I0711 00:27:05.448476   52001 ssh_runner.go:195] Run: systemctl --version
I0711 00:27:05.448520   52001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-403028
I0711 00:27:05.466216   52001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/functional-403028/id_rsa Username:docker}
I0711 00:27:05.551399   52001 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-403028 image ls --format json --alsologtostderr:
[{"id":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"16190758"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:2be84dd575ee2ecdb186dc43a9cd951890a764d2cefbd31a72cdf4410c43a2d0","repoDigests":["docker.io/library/mysql@sha256:bd873931ef20f30a5a9bf71498ce4e02c88cf48b2e8b782c337076d814deebde"],"repoTags":["docker.io/library/mysql:5.7"],"size":"169282307"},{"id":"sha256:4937520ae206c8969734d9a659fc1e6594d9b22b9340bf0796defbea0c92dd02","repoDigests":["docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6"],"repoTags":["docker.io/library/nginx:alpine"],"size":"16978757"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-403028"],"size":"10823156"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:b7464ea79fcfdc35ade9543d525ebdcbb7b8b546cdb623df4ee41909769e93c0","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-403028"],"size":"1006"},{"id":"sha256:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0"],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.3"],"size":"33364386"},{"id":"sha256:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c","repoDigests":["registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699"],"repoTags":["registry.k8s.io/kube-proxy:v1.27.3"],"size":"23897400"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da","repoDigests":["docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"27731571"},{"id":"sha256:021283c8eb95be02b23db0de7f609d603553c6714785e7a673c6594a624ffbda","repoDigests":["docker.io/library/nginx@sha256:08bc36ad52474e528cc1ea3426b5e3f4bad8a130318e3140d6cfe29c8892c7ef"],"repoTags":["docker.io/library/nginx:latest"],"size":"70601656"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","repoDigests":["registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83"],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"101639218"},{"id":"sha256:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.3"],"size":"30973055"},{"id":"sha256:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a","repoDigests":["registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.3"],"size":"18231737"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-403028 image ls --format json --alsologtostderr:
I0711 00:27:05.258514   51953 out.go:296] Setting OutFile to fd 1 ...
I0711 00:27:05.258641   51953 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0711 00:27:05.258650   51953 out.go:309] Setting ErrFile to fd 2...
I0711 00:27:05.258656   51953 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0711 00:27:05.258775   51953 root.go:337] Updating PATH: /home/jenkins/minikube-integration/15452-3381/.minikube/bin
I0711 00:27:05.259320   51953 config.go:182] Loaded profile config "functional-403028": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0711 00:27:05.259429   51953 config.go:182] Loaded profile config "functional-403028": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0711 00:27:05.259808   51953 cli_runner.go:164] Run: docker container inspect functional-403028 --format={{.State.Status}}
I0711 00:27:05.278373   51953 ssh_runner.go:195] Run: systemctl --version
I0711 00:27:05.278418   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-403028
I0711 00:27:05.298614   51953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/functional-403028/id_rsa Username:docker}
I0711 00:27:05.387118   51953 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-403028 image ls --format yaml --alsologtostderr:
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.3
size: "30973055"
- id: sha256:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.3
size: "18231737"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681
repoDigests:
- registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "101639218"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:4937520ae206c8969734d9a659fc1e6594d9b22b9340bf0796defbea0c92dd02
repoDigests:
- docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6
repoTags:
- docker.io/library/nginx:alpine
size: "16978757"
- id: sha256:021283c8eb95be02b23db0de7f609d603553c6714785e7a673c6594a624ffbda
repoDigests:
- docker.io/library/nginx@sha256:08bc36ad52474e528cc1ea3426b5e3f4bad8a130318e3140d6cfe29c8892c7ef
repoTags:
- docker.io/library/nginx:latest
size: "70601656"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "16190758"
- id: sha256:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.3
size: "33364386"
- id: sha256:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c
repoDigests:
- registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699
repoTags:
- registry.k8s.io/kube-proxy:v1.27.3
size: "23897400"
- id: sha256:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da
repoDigests:
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "27731571"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:2be84dd575ee2ecdb186dc43a9cd951890a764d2cefbd31a72cdf4410c43a2d0
repoDigests:
- docker.io/library/mysql@sha256:bd873931ef20f30a5a9bf71498ce4e02c88cf48b2e8b782c337076d814deebde
repoTags:
- docker.io/library/mysql:5.7
size: "169282307"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-403028
size: "10823156"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:b7464ea79fcfdc35ade9543d525ebdcbb7b8b546cdb623df4ee41909769e93c0
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-403028
size: "1006"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-403028 image ls --format yaml --alsologtostderr:
I0711 00:27:05.478964   52031 out.go:296] Setting OutFile to fd 1 ...
I0711 00:27:05.479088   52031 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0711 00:27:05.479097   52031 out.go:309] Setting ErrFile to fd 2...
I0711 00:27:05.479104   52031 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0711 00:27:05.479223   52031 root.go:337] Updating PATH: /home/jenkins/minikube-integration/15452-3381/.minikube/bin
I0711 00:27:05.479801   52031 config.go:182] Loaded profile config "functional-403028": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0711 00:27:05.479917   52031 config.go:182] Loaded profile config "functional-403028": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0711 00:27:05.480288   52031 cli_runner.go:164] Run: docker container inspect functional-403028 --format={{.State.Status}}
I0711 00:27:05.500405   52031 ssh_runner.go:195] Run: systemctl --version
I0711 00:27:05.500476   52031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-403028
I0711 00:27:05.522869   52031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/functional-403028/id_rsa Username:docker}
I0711 00:27:05.611744   52031 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-403028 ssh pgrep buildkitd: exit status 1 (297.707763ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 image build -t localhost/my-image:functional-403028 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-403028 image build -t localhost/my-image:functional-403028 testdata/build --alsologtostderr: (2.370043891s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-403028 image build -t localhost/my-image:functional-403028 testdata/build --alsologtostderr:
I0711 00:27:05.932940   52208 out.go:296] Setting OutFile to fd 1 ...
I0711 00:27:05.933097   52208 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0711 00:27:05.933107   52208 out.go:309] Setting ErrFile to fd 2...
I0711 00:27:05.933112   52208 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0711 00:27:05.933219   52208 root.go:337] Updating PATH: /home/jenkins/minikube-integration/15452-3381/.minikube/bin
I0711 00:27:05.933775   52208 config.go:182] Loaded profile config "functional-403028": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0711 00:27:05.934827   52208 config.go:182] Loaded profile config "functional-403028": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0711 00:27:05.935818   52208 cli_runner.go:164] Run: docker container inspect functional-403028 --format={{.State.Status}}
I0711 00:27:05.953615   52208 ssh_runner.go:195] Run: systemctl --version
I0711 00:27:05.953668   52208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-403028
I0711 00:27:05.971320   52208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/functional-403028/id_rsa Username:docker}
I0711 00:27:06.058449   52208 build_images.go:151] Building image from path: /tmp/build.2970139601.tar
I0711 00:27:06.058509   52208 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0711 00:27:06.066428   52208 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2970139601.tar
I0711 00:27:06.070244   52208 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2970139601.tar: stat -c "%s %y" /var/lib/minikube/build/build.2970139601.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2970139601.tar': No such file or directory
I0711 00:27:06.070291   52208 ssh_runner.go:362] scp /tmp/build.2970139601.tar --> /var/lib/minikube/build/build.2970139601.tar (3072 bytes)
I0711 00:27:06.096565   52208 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2970139601
I0711 00:27:06.104140   52208 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2970139601 -xf /var/lib/minikube/build/build.2970139601.tar
I0711 00:27:06.113585   52208 containerd.go:378] Building image: /var/lib/minikube/build/build.2970139601
I0711 00:27:06.113646   52208 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2970139601 --local dockerfile=/var/lib/minikube/build/build.2970139601 --output type=image,name=localhost/my-image:functional-403028
#1 [internal] load .dockerignore
#1 DONE 0.0s

#2 [internal] load build definition from Dockerfile
#2 DONE 0.0s

#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.0s

#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 97B done
#2 DONE 0.0s

#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 0.3s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.1s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.2s

#6 [2/3] RUN true
#6 DONE 1.4s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:38c849426f729855a17554284743557af3b868b18e8e90c3189491510ba2c571 0.0s done
#8 exporting config sha256:03765ca7854df704163607ae4c19c6ffc261d100808cee276ac4903db543509d 0.0s done
#8 naming to localhost/my-image:functional-403028 done
#8 DONE 0.1s
I0711 00:27:08.237295   52208 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2970139601 --local dockerfile=/var/lib/minikube/build/build.2970139601 --output type=image,name=localhost/my-image:functional-403028: (2.123619101s)
I0711 00:27:08.237355   52208 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2970139601
I0711 00:27:08.248694   52208 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2970139601.tar
I0711 00:27:08.257662   52208 build_images.go:207] Built localhost/my-image:functional-403028 from /tmp/build.2970139601.tar
I0711 00:27:08.257689   52208 build_images.go:123] succeeded building to: functional-403028
I0711 00:27:08.257694   52208 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.87s)

TestFunctional/parallel/ImageCommands/Setup (0.94s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-403028
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.94s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "305.299479ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "56.949842ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 image load --daemon gcr.io/google-containers/addon-resizer:functional-403028 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-403028 image load --daemon gcr.io/google-containers/addon-resizer:functional-403028 --alsologtostderr: (4.898213279s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.11s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "362.703703ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "79.290412ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

TestFunctional/parallel/MountCmd/any-port (7.24s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-403028 /tmp/TestFunctionalparallelMountCmdany-port840989697/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1689035201593402591" to /tmp/TestFunctionalparallelMountCmdany-port840989697/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1689035201593402591" to /tmp/TestFunctionalparallelMountCmdany-port840989697/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1689035201593402591" to /tmp/TestFunctionalparallelMountCmdany-port840989697/001/test-1689035201593402591
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-403028 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (294.242848ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 11 00:26 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 11 00:26 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 11 00:26 test-1689035201593402591
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 ssh cat /mount-9p/test-1689035201593402591
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-403028 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [1d578085-c523-415a-a473-4f53a3bccc1b] Pending
helpers_test.go:344: "busybox-mount" [1d578085-c523-415a-a473-4f53a3bccc1b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [1d578085-c523-415a-a473-4f53a3bccc1b] Running
helpers_test.go:344: "busybox-mount" [1d578085-c523-415a-a473-4f53a3bccc1b] Running: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [1d578085-c523-415a-a473-4f53a3bccc1b] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.008923699s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-403028 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-403028 /tmp/TestFunctionalparallelMountCmdany-port840989697/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.24s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 image load --daemon gcr.io/google-containers/addon-resizer:functional-403028 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-403028 image load --daemon gcr.io/google-containers/addon-resizer:functional-403028 --alsologtostderr: (5.439023709s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.74s)

TestFunctional/parallel/MountCmd/specific-port (2.27s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-403028 /tmp/TestFunctionalparallelMountCmdspecific-port3342965088/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-403028 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (301.383522ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-403028 /tmp/TestFunctionalparallelMountCmdspecific-port3342965088/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-403028 ssh "sudo umount -f /mount-9p": exit status 1 (286.199169ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-403028 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-403028 /tmp/TestFunctionalparallelMountCmdspecific-port3342965088/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.27s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.16s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-403028 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2628284062/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-403028 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2628284062/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-403028 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2628284062/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-403028 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-403028 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2628284062/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-403028 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2628284062/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-403028 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2628284062/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.16s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-403028
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 image load --daemon gcr.io/google-containers/addon-resizer:functional-403028 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-403028 image load --daemon gcr.io/google-containers/addon-resizer:functional-403028 --alsologtostderr: (5.789636047s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.10s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 image save gcr.io/google-containers/addon-resizer:functional-403028 /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
E0711 00:26:59.831974   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/addons-906872/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.99s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 image rm gcr.io/google-containers/addon-resizer:functional-403028 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-403028 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.659066175s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.92s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-403028
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-403028 image save --daemon gcr.io/google-containers/addon-resizer:functional-403028 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-403028 image save --daemon gcr.io/google-containers/addon-resizer:functional-403028 --alsologtostderr: (1.777204463s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-403028
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.82s)

TestFunctional/delete_addon-resizer_images (0.08s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-403028
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-403028
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-403028
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (71.28s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-745307 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0711 00:27:40.792367   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/addons-906872/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-745307 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m11.278753193s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (71.28s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (8.69s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-745307 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-745307 addons enable ingress --alsologtostderr -v=5: (8.693525188s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (8.69s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.36s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-745307 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.36s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (39.69s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-745307 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-745307 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (13.123743s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-745307 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-745307 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d9825683-a862-46cb-8e9e-56a6af73699c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d9825683-a862-46cb-8e9e-56a6af73699c] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 8.007569069s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-745307 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-745307 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-745307 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-745307 addons disable ingress-dns --alsologtostderr -v=1
E0711 00:29:02.712931   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/addons-906872/client.crt: no such file or directory
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-745307 addons disable ingress-dns --alsologtostderr -v=1: (10.095901597s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-745307 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-745307 addons disable ingress --alsologtostderr -v=1: (7.31722549s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (39.69s)

TestJSONOutput/start/Command (80.52s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-554544 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-554544 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m20.522670776s)
--- PASS: TestJSONOutput/start/Command (80.52s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.66s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-554544 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.61s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-554544 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.74s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-554544 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-554544 --output=json --user=testUser: (5.737835917s)
--- PASS: TestJSONOutput/stop/Command (5.74s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-922383 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-922383 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (69.45458ms)

-- stdout --
	{"specversion":"1.0","id":"91a25213-d1a6-41ac-a7f7-de14a11166bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-922383] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3b9b718e-2eb5-4df4-988b-250d8710a429","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15452"}}
	{"specversion":"1.0","id":"54acfb87-9ea6-4589-a045-112437b80c00","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"607c22e8-58a5-4b57-84b4-b395279faf43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15452-3381/kubeconfig"}}
	{"specversion":"1.0","id":"22a54baf-e974-464a-8db1-341f47804ca4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15452-3381/.minikube"}}
	{"specversion":"1.0","id":"9db26c6c-8bc1-49d8-8066-97a3e3e7f6e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"eb98c308-505e-420b-8018-cf2775b83934","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"80494adb-83d5-42dd-b2da-006297f99082","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-922383" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-922383
--- PASS: TestErrorJSONOutput (0.20s)

TestKicCustomNetwork/create_custom_network (33.92s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-774178 --network=
E0711 00:31:18.870214   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/addons-906872/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-774178 --network=: (31.777417545s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-774178" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-774178
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-774178: (2.122944868s)
--- PASS: TestKicCustomNetwork/create_custom_network (33.92s)

TestKicCustomNetwork/use_default_bridge_network (24.48s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-057361 --network=bridge
E0711 00:31:29.831281   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/functional-403028/client.crt: no such file or directory
E0711 00:31:29.836578   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/functional-403028/client.crt: no such file or directory
E0711 00:31:29.846830   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/functional-403028/client.crt: no such file or directory
E0711 00:31:29.867243   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/functional-403028/client.crt: no such file or directory
E0711 00:31:29.907517   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/functional-403028/client.crt: no such file or directory
E0711 00:31:29.987878   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/functional-403028/client.crt: no such file or directory
E0711 00:31:30.148409   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/functional-403028/client.crt: no such file or directory
E0711 00:31:30.468993   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/functional-403028/client.crt: no such file or directory
E0711 00:31:31.110005   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/functional-403028/client.crt: no such file or directory
E0711 00:31:32.391189   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/functional-403028/client.crt: no such file or directory
E0711 00:31:34.951872   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/functional-403028/client.crt: no such file or directory
E0711 00:31:40.072622   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/functional-403028/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-057361 --network=bridge: (22.473865066s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-057361" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-057361
E0711 00:31:46.554031   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/addons-906872/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-057361: (1.985032004s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.48s)

TestKicExistingNetwork (26.06s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-039323 --network=existing-network
E0711 00:31:50.313048   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/functional-403028/client.crt: no such file or directory
E0711 00:32:10.794149   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/functional-403028/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-039323 --network=existing-network: (24.019650407s)
helpers_test.go:175: Cleaning up "existing-network-039323" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-039323
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-039323: (1.903700622s)
--- PASS: TestKicExistingNetwork (26.06s)

TestKicCustomSubnet (25.85s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-033923 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-033923 --subnet=192.168.60.0/24: (23.860144697s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-033923 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-033923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-033923
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-033923: (1.976362378s)
--- PASS: TestKicCustomSubnet (25.85s)

TestKicStaticIP (26.52s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-799923 --static-ip=192.168.200.200
E0711 00:32:51.755010   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/functional-403028/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-799923 --static-ip=192.168.200.200: (24.29601062s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-799923 ip
helpers_test.go:175: Cleaning up "static-ip-799923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-799923
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-799923: (2.107032388s)
--- PASS: TestKicStaticIP (26.52s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (49.77s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-471933 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-471933 --driver=docker  --container-runtime=containerd: (20.937326509s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-474313 --driver=docker  --container-runtime=containerd
E0711 00:33:33.862473   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/ingress-addon-legacy-745307/client.crt: no such file or directory
E0711 00:33:33.867766   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/ingress-addon-legacy-745307/client.crt: no such file or directory
E0711 00:33:33.878034   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/ingress-addon-legacy-745307/client.crt: no such file or directory
E0711 00:33:33.898282   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/ingress-addon-legacy-745307/client.crt: no such file or directory
E0711 00:33:33.938590   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/ingress-addon-legacy-745307/client.crt: no such file or directory
E0711 00:33:34.018902   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/ingress-addon-legacy-745307/client.crt: no such file or directory
E0711 00:33:34.179396   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/ingress-addon-legacy-745307/client.crt: no such file or directory
E0711 00:33:34.500065   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/ingress-addon-legacy-745307/client.crt: no such file or directory
E0711 00:33:35.140978   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/ingress-addon-legacy-745307/client.crt: no such file or directory
E0711 00:33:36.421533   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/ingress-addon-legacy-745307/client.crt: no such file or directory
E0711 00:33:38.982736   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/ingress-addon-legacy-745307/client.crt: no such file or directory
E0711 00:33:44.102929   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/ingress-addon-legacy-745307/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-474313 --driver=docker  --container-runtime=containerd: (23.858011035s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-471933
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-474313
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-474313" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-474313
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-474313: (1.860205684s)
helpers_test.go:175: Cleaning up "first-471933" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-471933
E0711 00:33:54.343626   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/ingress-addon-legacy-745307/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-471933: (2.130409263s)
--- PASS: TestMinikubeProfile (49.77s)

TestMountStart/serial/StartWithMountFirst (4.86s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-252853 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-252853 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.857378532s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.86s)

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-252853 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (4.97s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-266839 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-266839 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.96673274s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.97s)

TestMountStart/serial/VerifyMountSecond (0.23s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-266839 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.23s)
TestMountStart/serial/DeleteFirst (1.63s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-252853 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-252853 --alsologtostderr -v=5: (1.626927932s)
--- PASS: TestMountStart/serial/DeleteFirst (1.63s)
TestMountStart/serial/VerifyMountPostDelete (0.24s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-266839 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)
TestMountStart/serial/Stop (1.2s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-266839
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-266839: (1.200592316s)
--- PASS: TestMountStart/serial/Stop (1.20s)
TestMountStart/serial/RestartStopped (6.68s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-266839
E0711 00:34:13.676796   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/functional-403028/client.crt: no such file or directory
E0711 00:34:14.823987   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/ingress-addon-legacy-745307/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-266839: (5.677219484s)
--- PASS: TestMountStart/serial/RestartStopped (6.68s)
TestMountStart/serial/VerifyMountPostStop (0.25s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-266839 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)
TestMultiNode/serial/FreshStart2Nodes (105.71s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-244497 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0711 00:34:55.784895   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/ingress-addon-legacy-745307/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-244497 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m45.26106612s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (105.71s)
TestMultiNode/serial/DeployApp2Nodes (29.49s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-244497 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-244497 -- rollout status deployment/busybox
E0711 00:36:17.705294   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/ingress-addon-legacy-745307/client.crt: no such file or directory
E0711 00:36:18.869928   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/addons-906872/client.crt: no such file or directory
E0711 00:36:29.831511   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/functional-403028/client.crt: no such file or directory
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-244497 -- rollout status deployment/busybox: (27.846682956s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-244497 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-244497 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-244497 -- exec busybox-67b7f59bb-4v79p -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-244497 -- exec busybox-67b7f59bb-r6b8s -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-244497 -- exec busybox-67b7f59bb-4v79p -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-244497 -- exec busybox-67b7f59bb-r6b8s -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-244497 -- exec busybox-67b7f59bb-4v79p -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-244497 -- exec busybox-67b7f59bb-r6b8s -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (29.49s)
TestMultiNode/serial/PingHostFrom2Pods (0.81s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-244497 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-244497 -- exec busybox-67b7f59bb-4v79p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-244497 -- exec busybox-67b7f59bb-4v79p -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-244497 -- exec busybox-67b7f59bb-r6b8s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-244497 -- exec busybox-67b7f59bb-r6b8s -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.81s)
TestMultiNode/serial/AddNode (14.96s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-244497 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-244497 -v 3 --alsologtostderr: (14.376632888s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (14.96s)
TestMultiNode/serial/ProfileList (0.28s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.28s)
TestMultiNode/serial/CopyFile (8.95s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 cp testdata/cp-test.txt multinode-244497:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 ssh -n multinode-244497 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 cp multinode-244497:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2244020379/001/cp-test_multinode-244497.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 ssh -n multinode-244497 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 cp multinode-244497:/home/docker/cp-test.txt multinode-244497-m02:/home/docker/cp-test_multinode-244497_multinode-244497-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 ssh -n multinode-244497 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 ssh -n multinode-244497-m02 "sudo cat /home/docker/cp-test_multinode-244497_multinode-244497-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 cp multinode-244497:/home/docker/cp-test.txt multinode-244497-m03:/home/docker/cp-test_multinode-244497_multinode-244497-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 ssh -n multinode-244497 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 ssh -n multinode-244497-m03 "sudo cat /home/docker/cp-test_multinode-244497_multinode-244497-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 cp testdata/cp-test.txt multinode-244497-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 ssh -n multinode-244497-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 cp multinode-244497-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2244020379/001/cp-test_multinode-244497-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 ssh -n multinode-244497-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 cp multinode-244497-m02:/home/docker/cp-test.txt multinode-244497:/home/docker/cp-test_multinode-244497-m02_multinode-244497.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 ssh -n multinode-244497-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 ssh -n multinode-244497 "sudo cat /home/docker/cp-test_multinode-244497-m02_multinode-244497.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 cp multinode-244497-m02:/home/docker/cp-test.txt multinode-244497-m03:/home/docker/cp-test_multinode-244497-m02_multinode-244497-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 ssh -n multinode-244497-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 ssh -n multinode-244497-m03 "sudo cat /home/docker/cp-test_multinode-244497-m02_multinode-244497-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 cp testdata/cp-test.txt multinode-244497-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 ssh -n multinode-244497-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 cp multinode-244497-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2244020379/001/cp-test_multinode-244497-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 ssh -n multinode-244497-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 cp multinode-244497-m03:/home/docker/cp-test.txt multinode-244497:/home/docker/cp-test_multinode-244497-m03_multinode-244497.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 ssh -n multinode-244497-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 ssh -n multinode-244497 "sudo cat /home/docker/cp-test_multinode-244497-m03_multinode-244497.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 cp multinode-244497-m03:/home/docker/cp-test.txt multinode-244497-m02:/home/docker/cp-test_multinode-244497-m03_multinode-244497-m02.txt
E0711 00:36:57.517418   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/functional-403028/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 ssh -n multinode-244497-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 ssh -n multinode-244497-m02 "sudo cat /home/docker/cp-test_multinode-244497-m03_multinode-244497-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.95s)
TestMultiNode/serial/StopNode (2.13s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-244497 node stop m03: (1.207713694s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-244497 status: exit status 7 (465.268827ms)
-- stdout --
	multinode-244497
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-244497-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-244497-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-244497 status --alsologtostderr: exit status 7 (451.812946ms)
-- stdout --
	multinode-244497
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-244497-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-244497-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0711 00:36:59.771359  110565 out.go:296] Setting OutFile to fd 1 ...
	I0711 00:36:59.771467  110565 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0711 00:36:59.771475  110565 out.go:309] Setting ErrFile to fd 2...
	I0711 00:36:59.771479  110565 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0711 00:36:59.771593  110565 root.go:337] Updating PATH: /home/jenkins/minikube-integration/15452-3381/.minikube/bin
	I0711 00:36:59.771744  110565 out.go:303] Setting JSON to false
	I0711 00:36:59.771764  110565 mustload.go:65] Loading cluster: multinode-244497
	I0711 00:36:59.771804  110565 notify.go:220] Checking for updates...
	I0711 00:36:59.772099  110565 config.go:182] Loaded profile config "multinode-244497": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0711 00:36:59.772110  110565 status.go:255] checking status of multinode-244497 ...
	I0711 00:36:59.772444  110565 cli_runner.go:164] Run: docker container inspect multinode-244497 --format={{.State.Status}}
	I0711 00:36:59.788436  110565 status.go:330] multinode-244497 host status = "Running" (err=<nil>)
	I0711 00:36:59.788453  110565 host.go:66] Checking if "multinode-244497" exists ...
	I0711 00:36:59.788672  110565 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-244497
	I0711 00:36:59.805233  110565 host.go:66] Checking if "multinode-244497" exists ...
	I0711 00:36:59.805584  110565 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0711 00:36:59.805630  110565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-244497
	I0711 00:36:59.821932  110565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/multinode-244497/id_rsa Username:docker}
	I0711 00:36:59.910736  110565 ssh_runner.go:195] Run: systemctl --version
	I0711 00:36:59.914255  110565 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0711 00:36:59.926108  110565 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0711 00:36:59.980866  110565 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:56 SystemTime:2023-07-11 00:36:59.970404378 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0711 00:36:59.981789  110565 kubeconfig.go:92] found "multinode-244497" server: "https://192.168.58.2:8443"
	I0711 00:36:59.981819  110565 api_server.go:166] Checking apiserver status ...
	I0711 00:36:59.981871  110565 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0711 00:36:59.992887  110565 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1372/cgroup
	I0711 00:37:00.002745  110565 api_server.go:182] apiserver freezer: "5:freezer:/docker/acbfda0b494d4125d0e8ff921bc4f176792df81392f8a9ac32dcba95550d46d9/kubepods/burstable/podc2f7a60aba26fb13ae1d83a83fb00f99/db7bfd376284fda273c9bc976e84d1d0101ce1924f27d9323010720e48ae37c1"
	I0711 00:37:00.002832  110565 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/acbfda0b494d4125d0e8ff921bc4f176792df81392f8a9ac32dcba95550d46d9/kubepods/burstable/podc2f7a60aba26fb13ae1d83a83fb00f99/db7bfd376284fda273c9bc976e84d1d0101ce1924f27d9323010720e48ae37c1/freezer.state
	I0711 00:37:00.011073  110565 api_server.go:204] freezer state: "THAWED"
	I0711 00:37:00.011094  110565 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0711 00:37:00.015262  110565 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0711 00:37:00.015283  110565 status.go:421] multinode-244497 apiserver status = Running (err=<nil>)
	I0711 00:37:00.015296  110565 status.go:257] multinode-244497 status: &{Name:multinode-244497 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0711 00:37:00.015319  110565 status.go:255] checking status of multinode-244497-m02 ...
	I0711 00:37:00.015537  110565 cli_runner.go:164] Run: docker container inspect multinode-244497-m02 --format={{.State.Status}}
	I0711 00:37:00.031714  110565 status.go:330] multinode-244497-m02 host status = "Running" (err=<nil>)
	I0711 00:37:00.031733  110565 host.go:66] Checking if "multinode-244497-m02" exists ...
	I0711 00:37:00.031980  110565 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-244497-m02
	I0711 00:37:00.046725  110565 host.go:66] Checking if "multinode-244497-m02" exists ...
	I0711 00:37:00.046959  110565 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0711 00:37:00.046998  110565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-244497-m02
	I0711 00:37:00.062640  110565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15452-3381/.minikube/machines/multinode-244497-m02/id_rsa Username:docker}
	I0711 00:37:00.146954  110565 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0711 00:37:00.158656  110565 status.go:257] multinode-244497-m02 status: &{Name:multinode-244497-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0711 00:37:00.158707  110565 status.go:255] checking status of multinode-244497-m03 ...
	I0711 00:37:00.159005  110565 cli_runner.go:164] Run: docker container inspect multinode-244497-m03 --format={{.State.Status}}
	I0711 00:37:00.176088  110565 status.go:330] multinode-244497-m03 host status = "Stopped" (err=<nil>)
	I0711 00:37:00.176112  110565 status.go:343] host is not running, skipping remaining checks
	I0711 00:37:00.176125  110565 status.go:257] multinode-244497-m03 status: &{Name:multinode-244497-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.13s)
TestMultiNode/serial/StartAfterStop (10.44s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-244497 node start m03 --alsologtostderr: (9.75395786s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.44s)
TestMultiNode/serial/RestartKeepsNodes (132.26s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-244497
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-244497
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-244497: (24.855608556s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-244497 --wait=true -v=8 --alsologtostderr
E0711 00:38:33.861937   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/ingress-addon-legacy-745307/client.crt: no such file or directory
E0711 00:39:01.545781   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/ingress-addon-legacy-745307/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-244497 --wait=true -v=8 --alsologtostderr: (1m47.314717871s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-244497
--- PASS: TestMultiNode/serial/RestartKeepsNodes (132.26s)
TestMultiNode/serial/DeleteNode (4.72s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-244497 node delete m03: (4.125049102s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.72s)
TestMultiNode/serial/StopMultiNode (23.91s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-244497 stop: (23.7444267s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-244497 status: exit status 7 (77.038012ms)
-- stdout --
	multinode-244497
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-244497-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-244497 status --alsologtostderr: exit status 7 (83.390961ms)
-- stdout --
	multinode-244497
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-244497-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0711 00:39:51.458654  121203 out.go:296] Setting OutFile to fd 1 ...
	I0711 00:39:51.459040  121203 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0711 00:39:51.459052  121203 out.go:309] Setting ErrFile to fd 2...
	I0711 00:39:51.459060  121203 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0711 00:39:51.459318  121203 root.go:337] Updating PATH: /home/jenkins/minikube-integration/15452-3381/.minikube/bin
	I0711 00:39:51.459792  121203 out.go:303] Setting JSON to false
	I0711 00:39:51.459863  121203 notify.go:220] Checking for updates...
	I0711 00:39:51.459822  121203 mustload.go:65] Loading cluster: multinode-244497
	I0711 00:39:51.460559  121203 config.go:182] Loaded profile config "multinode-244497": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0711 00:39:51.460584  121203 status.go:255] checking status of multinode-244497 ...
	I0711 00:39:51.460951  121203 cli_runner.go:164] Run: docker container inspect multinode-244497 --format={{.State.Status}}
	I0711 00:39:51.482206  121203 status.go:330] multinode-244497 host status = "Stopped" (err=<nil>)
	I0711 00:39:51.482224  121203 status.go:343] host is not running, skipping remaining checks
	I0711 00:39:51.482230  121203 status.go:257] multinode-244497 status: &{Name:multinode-244497 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0711 00:39:51.482246  121203 status.go:255] checking status of multinode-244497-m02 ...
	I0711 00:39:51.482497  121203 cli_runner.go:164] Run: docker container inspect multinode-244497-m02 --format={{.State.Status}}
	I0711 00:39:51.499959  121203 status.go:330] multinode-244497-m02 host status = "Stopped" (err=<nil>)
	I0711 00:39:51.500008  121203 status.go:343] host is not running, skipping remaining checks
	I0711 00:39:51.500018  121203 status.go:257] multinode-244497-m02 status: &{Name:multinode-244497-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.91s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (88.43s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-244497 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0711 00:41:18.870127   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/addons-906872/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-244497 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m27.837699197s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-244497 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (88.43s)
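The `kubectl get nodes -o go-template=...` step above prints the status of each node's "Ready" condition. A minimal stand-alone sketch of the same extraction, using a hypothetical two-node payload and `python3` in place of the go-template engine (the payload and variable names are illustrative, not from this run):

```shell
# Hypothetical sample of `kubectl get nodes -o json`, trimmed to the fields
# the go-template in the test actually reads (two nodes, both Ready).
nodes='{"items":[
  {"status":{"conditions":[{"type":"Ready","status":"True"}]}},
  {"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}'

# Print the status of every node's "Ready" condition, one per line,
# mirroring what the go-template renders.
ready=$(printf '%s' "$nodes" | python3 -c '
import json, sys
for item in json.load(sys.stdin)["items"]:
    for cond in item["status"]["conditions"]:
        if cond["type"] == "Ready":
            print(cond["status"])
')
echo "$ready"
```

Against a live cluster the same check is simply the `kubectl get nodes -o go-template=...` invocation shown in the log.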

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (26.21s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-244497
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-244497-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-244497-m02 --driver=docker  --container-runtime=containerd: exit status 14 (66.418571ms)

                                                
                                                
-- stdout --
	* [multinode-244497-m02] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15452
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15452-3381/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15452-3381/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-244497-m02' is duplicated with machine name 'multinode-244497-m02' in profile 'multinode-244497'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-244497-m03 --driver=docker  --container-runtime=containerd
E0711 00:41:29.831247   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/functional-403028/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-244497-m03 --driver=docker  --container-runtime=containerd: (23.978943397s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-244497
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-244497: exit status 80 (271.033604ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-244497
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-244497-m03 already exists in multinode-244497-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-244497-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-244497-m03: (1.854750384s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.21s)

                                                
                                    
TestPreload (154.43s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-059927 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0711 00:42:41.915171   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/addons-906872/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-059927 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m2.067874169s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-059927 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-059927
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-059927: (11.939084642s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-059927 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E0711 00:43:33.862135   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/ingress-addon-legacy-745307/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-059927 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (1m17.260021765s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-059927 image list
helpers_test.go:175: Cleaning up "test-preload-059927" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-059927
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-059927: (2.281488013s)
--- PASS: TestPreload (154.43s)

                                                
                                    
TestScheduledStopUnix (96.89s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-883443 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-883443 --memory=2048 --driver=docker  --container-runtime=containerd: (21.274278631s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-883443 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-883443 -n scheduled-stop-883443
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-883443 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-883443 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-883443 -n scheduled-stop-883443
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-883443
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-883443 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-883443
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-883443: exit status 7 (58.719493ms)

                                                
                                                
-- stdout --
	scheduled-stop-883443
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-883443 -n scheduled-stop-883443
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-883443 -n scheduled-stop-883443: exit status 7 (58.175937ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-883443" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-883443
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-883443: (4.363856088s)
--- PASS: TestScheduledStopUnix (96.89s)

                                                
                                    
TestInsufficientStorage (12.94s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-035590 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-035590 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.579300473s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9c65243d-1f48-42ec-8561-06e7734893d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-035590] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fc8eea52-2056-4308-972d-886bad8c1f00","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15452"}}
	{"specversion":"1.0","id":"a7acbeb4-4037-4ce9-bd44-c928fc21ffa8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"36495bb2-4df8-4787-b334-d892c08fc697","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15452-3381/kubeconfig"}}
	{"specversion":"1.0","id":"e70010d8-dc99-44b1-a874-c5b1b754814d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15452-3381/.minikube"}}
	{"specversion":"1.0","id":"a6b241ee-47f6-428a-800c-b2c499ba7bad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"59c76747-1f70-4259-86f0-469c6bb954f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"021309ae-2da1-4b2b-9e2c-25e1ea0c4d64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"edac8315-9182-4cc2-8531-8af734235c52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"e659599b-84b6-419c-a73e-9f3439daf71b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"7bcd3b3a-daff-4422-9222-693ba5b16f4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"3a1eac65-d552-4562-abac-5568bc5e3dfe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-035590 in cluster insufficient-storage-035590","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"9cd2c4ca-3d98-49b3-9287-f47e4d1f2c1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"17392382-f38b-4950-9410-ab944cb6e569","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"8eacaed9-1c8d-4a03-876b-06b137635e75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-035590 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-035590 --output=json --layout=cluster: exit status 7 (258.050645ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-035590","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-035590","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0711 00:46:12.081245  142844 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-035590" does not appear in /home/jenkins/minikube-integration/15452-3381/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-035590 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-035590 --output=json --layout=cluster: exit status 7 (258.788497ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-035590","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-035590","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0711 00:46:12.342191  142933 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-035590" does not appear in /home/jenkins/minikube-integration/15452-3381/kubeconfig
	E0711 00:46:12.352413  142933 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/insufficient-storage-035590/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-035590" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-035590
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-035590: (1.846087364s)
--- PASS: TestInsufficientStorage (12.94s)
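The `--output=json --layout=cluster` payloads above are plain JSON, so the InsufficientStorage condition can be picked out mechanically. A minimal sketch using an abridged copy of the payload from this run and `python3` for parsing; the field names (`StatusName`, `Nodes`) come directly from the output above, nothing else is assumed:

```shell
# Abridged from the `minikube status --output=json --layout=cluster` output above.
status='{"Name":"insufficient-storage-035590","StatusCode":507,"StatusName":"InsufficientStorage",
"Nodes":[{"Name":"insufficient-storage-035590","StatusCode":507,"StatusName":"InsufficientStorage"}]}'

# Report the cluster-level status name, then each node's name and status.
summary=$(printf '%s' "$status" | python3 -c '
import json, sys
s = json.load(sys.stdin)
print(s["StatusName"])
for node in s["Nodes"]:
    print(node["Name"], node["StatusName"])
')
echo "$summary"
```

The non-zero exit status 7 alongside valid JSON on stdout is why the test asserts on both the exit code and the parsed payload.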

                                                
                                    
TestRunningBinaryUpgrade (79.26s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.22.0.391568676.exe start -p running-upgrade-938572 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.22.0.391568676.exe start -p running-upgrade-938572 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (53.452053908s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-938572 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:142: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-938572 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (20.14241815s)
helpers_test.go:175: Cleaning up "running-upgrade-938572" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-938572
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-938572: (4.898002399s)
--- PASS: TestRunningBinaryUpgrade (79.26s)

                                                
                                    
TestKubernetesUpgrade (376.77s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-840111 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-840111 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (54.074988656s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-840111
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-840111: (1.404304493s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-840111 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-840111 status --format={{.Host}}: exit status 7 (87.704421ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-840111 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-840111 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m43.839791027s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-840111 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-840111 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-840111 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd: exit status 106 (76.21644ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-840111] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15452
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15452-3381/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15452-3381/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.27.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-840111
	    minikube start -p kubernetes-upgrade-840111 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8401112 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.27.3, by running:
	    
	    minikube start -p kubernetes-upgrade-840111 --kubernetes-version=v1.27.3
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-840111 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-840111 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (34.844762701s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-840111" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-840111
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-840111: (2.378434244s)
--- PASS: TestKubernetesUpgrade (376.77s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.72s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.72s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-485349 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-485349 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (83.405584ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-485349] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15452
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15452-3381/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15452-3381/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (40.76s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-485349 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-485349 --driver=docker  --container-runtime=containerd: (40.450813986s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-485349 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.76s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (137.45s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.22.0.3461116954.exe start -p stopped-upgrade-507666 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0711 00:46:18.869397   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/addons-906872/client.crt: no such file or directory
E0711 00:46:29.831172   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/functional-403028/client.crt: no such file or directory
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.22.0.3461116954.exe start -p stopped-upgrade-507666 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (1m14.115826576s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.22.0.3461116954.exe -p stopped-upgrade-507666 stop
version_upgrade_test.go:204: (dbg) Done: /tmp/minikube-v1.22.0.3461116954.exe -p stopped-upgrade-507666 stop: (20.104766147s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-507666 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:210: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-507666 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (43.230746308s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (137.45s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (20.72s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-485349 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-485349 --no-kubernetes --driver=docker  --container-runtime=containerd: (15.037155653s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-485349 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-485349 status -o json: exit status 2 (367.642671ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-485349","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-485349
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-485349: (5.314619077s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (20.72s)
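For reference, the `status -o json` payload printed above is plain JSON and can be inspected mechanically. A minimal sketch (the payload string is copied verbatim from the output above; reading "any `Stopped` component implies the non-zero exit" is an assumption inferred from the exit status 2 seen here, not minikube's documented contract):

```python
import json

# Status JSON exactly as printed by `minikube status -o json` above.
payload = ('{"Name":"NoKubernetes-485349","Host":"Running","Kubelet":"Stopped",'
           '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}')

status = json.loads(payload)

# With --no-kubernetes the host container runs but no control plane does, so
# Kubelet and APIServer report "Stopped" and the status command exits non-zero
# (exit status 2 in the run above).
stopped = [component for component, state in status.items() if state == "Stopped"]
print(stopped)  # ['Kubelet', 'APIServer']
```

This is why the test treats exit status 2 as expected here rather than a failure.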
TestNoKubernetes/serial/Start (7.49s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-485349 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-485349 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.493520274s)
--- PASS: TestNoKubernetes/serial/Start (7.49s)
TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-485349 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-485349 "sudo systemctl is-active --quiet service kubelet": exit status 1 (290.167945ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
TestNoKubernetes/serial/ProfileList (3.17s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (2.508473056s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (3.17s)
TestNoKubernetes/serial/Stop (2.39s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-485349
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-485349: (2.393835031s)
--- PASS: TestNoKubernetes/serial/Stop (2.39s)
TestNoKubernetes/serial/StartNoArgs (6.29s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-485349 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-485349 --driver=docker  --container-runtime=containerd: (6.288199307s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.29s)
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-485349 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-485349 "sudo systemctl is-active --quiet service kubelet": exit status 1 (299.161273ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)
TestStoppedBinaryUpgrade/MinikubeLogs (1.08s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-507666
version_upgrade_test.go:218: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-507666: (1.082429228s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.08s)
TestNetworkPlugins/group/false (4.01s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-738578 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-738578 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (175.746719ms)
-- stdout --
	* [false-738578] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15452
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15452-3381/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15452-3381/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I0711 00:48:38.753738  175551 out.go:296] Setting OutFile to fd 1 ...
	I0711 00:48:38.753866  175551 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0711 00:48:38.753876  175551 out.go:309] Setting ErrFile to fd 2...
	I0711 00:48:38.753883  175551 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0711 00:48:38.754041  175551 root.go:337] Updating PATH: /home/jenkins/minikube-integration/15452-3381/.minikube/bin
	I0711 00:48:38.754594  175551 out.go:303] Setting JSON to false
	I0711 00:48:38.756134  175551 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1871,"bootTime":1689034648,"procs":789,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0711 00:48:38.756193  175551 start.go:137] virtualization: kvm guest
	I0711 00:48:38.759025  175551 out.go:177] * [false-738578] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0711 00:48:38.760873  175551 out.go:177]   - MINIKUBE_LOCATION=15452
	I0711 00:48:38.760922  175551 notify.go:220] Checking for updates...
	I0711 00:48:38.762318  175551 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0711 00:48:38.764014  175551 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15452-3381/kubeconfig
	I0711 00:48:38.766013  175551 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15452-3381/.minikube
	I0711 00:48:38.767737  175551 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0711 00:48:38.769722  175551 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0711 00:48:38.773191  175551 config.go:182] Loaded profile config "kubernetes-upgrade-840111": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0711 00:48:38.773443  175551 config.go:182] Loaded profile config "missing-upgrade-576591": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0711 00:48:38.773653  175551 config.go:182] Loaded profile config "running-upgrade-938572": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0711 00:48:38.773851  175551 driver.go:373] Setting default libvirt URI to qemu:///system
	I0711 00:48:38.806549  175551 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0711 00:48:38.806716  175551 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0711 00:48:38.878239  175551 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:true NGoroutines:74 SystemTime:2023-07-11 00:48:38.868895623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0711 00:48:38.878330  175551 docker.go:294] overlay module found
	I0711 00:48:38.880278  175551 out.go:177] * Using the docker driver based on user configuration
	I0711 00:48:38.881612  175551 start.go:297] selected driver: docker
	I0711 00:48:38.881627  175551 start.go:944] validating driver "docker" against <nil>
	I0711 00:48:38.881639  175551 start.go:955] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0711 00:48:38.883625  175551 out.go:177] 
	W0711 00:48:38.884791  175551 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0711 00:48:38.886041  175551 out.go:177] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-738578 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-738578

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-738578

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-738578

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-738578

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-738578

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-738578

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-738578

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-738578

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-738578

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-738578

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-738578"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-738578"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-738578"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-738578

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-738578"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-738578"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-738578" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-738578" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-738578" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-738578" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-738578" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-738578" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-738578" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-738578" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-738578"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-738578"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-738578"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-738578"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-738578"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-738578" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-738578" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-738578" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-738578"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-738578"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-738578"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-738578"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-738578"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/15452-3381/.minikube/ca.crt
extensions:
- extension:
last-update: Tue, 11 Jul 2023 00:48:34 UTC
provider: minikube.sigs.k8s.io
version: v1.30.1
name: cluster_info
server: https://192.168.76.2:8443
name: kubernetes-upgrade-840111
- cluster:
certificate-authority: /home/jenkins/minikube-integration/15452-3381/.minikube/ca.crt
extensions:
- extension:
last-update: Tue, 11 Jul 2023 00:47:37 UTC
provider: minikube.sigs.k8s.io
version: v1.22.0
name: cluster_info
server: https://192.168.67.2:8443
name: missing-upgrade-576591
contexts:
- context:
cluster: kubernetes-upgrade-840111
user: kubernetes-upgrade-840111
name: kubernetes-upgrade-840111
- context:
cluster: missing-upgrade-576591
extensions:
- extension:
last-update: Tue, 11 Jul 2023 00:47:37 UTC
provider: minikube.sigs.k8s.io
version: v1.22.0
name: context_info
namespace: default
user: missing-upgrade-576591
name: missing-upgrade-576591
current-context: kubernetes-upgrade-840111
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-840111
user:
client-certificate: /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/kubernetes-upgrade-840111/client.crt
client-key: /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/kubernetes-upgrade-840111/client.key
- name: missing-upgrade-576591
user:
client-certificate: /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/missing-upgrade-576591/client.crt
client-key: /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/missing-upgrade-576591/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-738578

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-738578"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-738578"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-738578"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-738578"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-738578"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-738578"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-738578"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-738578"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-738578"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-738578"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-738578"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-738578"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-738578"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-738578"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-738578"

>>> host: crio daemon config:
* Profile "false-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-738578"

>>> host: /etc/crio:
* Profile "false-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-738578"

>>> host: crio config:
* Profile "false-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-738578"

----------------------- debugLogs end: false-738578 [took: 3.646270297s] --------------------------------
helpers_test.go:175: Cleaning up "false-738578" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-738578
--- PASS: TestNetworkPlugins/group/false (4.01s)

TestPause/serial/Start (57.8s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-117196 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-117196 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (57.800659064s)
--- PASS: TestPause/serial/Start (57.80s)

TestStartStop/group/old-k8s-version/serial/FirstStart (114.8s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-884493 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E0711 00:49:56.906093   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/ingress-addon-legacy-745307/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-884493 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (1m54.799511858s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (114.80s)

TestPause/serial/SecondStartNoReconfiguration (15.81s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-117196 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-117196 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (15.794882981s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (15.81s)

TestPause/serial/Pause (0.69s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-117196 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.69s)

TestPause/serial/VerifyStatus (0.31s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-117196 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-117196 --output=json --layout=cluster: exit status 2 (305.768179ms)

-- stdout --
	{"Name":"pause-117196","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-117196","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)

TestPause/serial/Unpause (0.64s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-117196 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.64s)

TestPause/serial/PauseAgain (0.83s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-117196 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.83s)

TestPause/serial/DeletePaused (2.54s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-117196 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-117196 --alsologtostderr -v=5: (2.539531254s)
--- PASS: TestPause/serial/DeletePaused (2.54s)

TestPause/serial/VerifyDeletedResources (0.61s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-117196
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-117196: exit status 1 (18.473894ms)

-- stdout --
	[]

-- /stdout --
** stderr **
	Error response from daemon: get pause-117196: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.61s)

TestStartStop/group/no-preload/serial/FirstStart (57.67s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-617524 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3
E0711 00:51:18.870192   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/addons-906872/client.crt: no such file or directory
E0711 00:51:29.831026   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/functional-403028/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-617524 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3: (57.670268576s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (57.67s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-884493 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bc810f8e-16c2-462a-8c0e-24408132834c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bc810f8e-16c2-462a-8c0e-24408132834c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.013580743s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-884493 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.44s)

TestStartStop/group/no-preload/serial/DeployApp (8.46s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-617524 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [26707c8a-b4b6-454c-a8d8-aac82b771152] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [26707c8a-b4b6-454c-a8d8-aac82b771152] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.016775741s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-617524 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.46s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.66s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-884493 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-884493 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.66s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-617524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-617524 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/old-k8s-version/serial/Stop (12.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-884493 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-884493 --alsologtostderr -v=3: (12.132791548s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.13s)

TestStartStop/group/no-preload/serial/Stop (12.1s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-617524 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-617524 --alsologtostderr -v=3: (12.098816391s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.10s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-884493 -n old-k8s-version-884493
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-884493 -n old-k8s-version-884493: exit status 7 (89.503149ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-884493 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (658.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-884493 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-884493 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (10m57.94932713s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-884493 -n old-k8s-version-884493
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (658.28s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-617524 -n no-preload-617524
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-617524 -n no-preload-617524: exit status 7 (96.265335ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-617524 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (330.3s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-617524 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-617524 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3: (5m29.973347026s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-617524 -n no-preload-617524
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (330.30s)

TestStartStop/group/embed-certs/serial/FirstStart (55.06s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-079631 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-079631 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3: (55.055180052s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (55.06s)

TestStartStop/group/embed-certs/serial/DeployApp (7.43s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-079631 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f353b83b-58a4-4335-930d-4558f347eda8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f353b83b-58a4-4335-930d-4558f347eda8] Running
E0711 00:53:33.861531   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/ingress-addon-legacy-745307/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.016737819s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-079631 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.43s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.84s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-079631 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-079631 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.84s)

TestStartStop/group/embed-certs/serial/Stop (13.21s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-079631 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-079631 --alsologtostderr -v=3: (13.211049646s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.21s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (73.57s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-634280 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-634280 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3: (1m13.566366429s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (73.57s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-079631 -n embed-certs-079631
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-079631 -n embed-certs-079631: exit status 7 (81.538007ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-079631 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (316.53s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-079631 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-079631 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3: (5m16.213736291s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-079631 -n embed-certs-079631
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (316.53s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-634280 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b2de9d91-06bc-4410-82c5-9623728fbaa5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b2de9d91-06bc-4410-82c5-9623728fbaa5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.015910936s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-634280 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.42s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.78s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-634280 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-634280 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.78s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-634280 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-634280 --alsologtostderr -v=3: (11.954332029s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.95s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-634280 -n default-k8s-diff-port-634280
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-634280 -n default-k8s-diff-port-634280: exit status 7 (63.989787ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-634280 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.14s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (583.49s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-634280 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3
E0711 00:56:18.870169   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/addons-906872/client.crt: no such file or directory
E0711 00:56:29.831153   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/functional-403028/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-634280 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3: (9m43.208759365s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-634280 -n default-k8s-diff-port-634280
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (583.49s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-xptcm" [976565f9-ddde-4f78-86b6-656ac19a102c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-xptcm" [976565f9-ddde-4f78-86b6-656ac19a102c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.016330683s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-xptcm" [976565f9-ddde-4f78-86b6-656ac19a102c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009581239s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-617524 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-617524 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.33s)

TestStartStop/group/no-preload/serial/Pause (2.8s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-617524 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-617524 -n no-preload-617524
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-617524 -n no-preload-617524: exit status 2 (293.301852ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-617524 -n no-preload-617524
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-617524 -n no-preload-617524: exit status 2 (316.801147ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-617524 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-617524 -n no-preload-617524
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-617524 -n no-preload-617524
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.80s)

TestStartStop/group/newest-cni/serial/FirstStart (37.05s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-480788 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-480788 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3: (37.049449813s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (37.05s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.81s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-480788 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.81s)

TestStartStop/group/newest-cni/serial/Stop (1.2s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-480788 --alsologtostderr -v=3
E0711 00:58:33.861646   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/ingress-addon-legacy-745307/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-480788 --alsologtostderr -v=3: (1.204555689s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.20s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.15s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-480788 -n newest-cni-480788
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-480788 -n newest-cni-480788: exit status 7 (65.312001ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-480788 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.15s)

TestStartStop/group/newest-cni/serial/SecondStart (37.87s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-480788 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-480788 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3: (37.539576225s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-480788 -n newest-cni-480788
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.87s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-vdtvr" [3a63ed10-44d3-4370-9487-d533b60e02db] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-vdtvr" [3a63ed10-44d3-4370-9487-d533b60e02db] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.017092771s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.02s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-480788 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/Pause (2.76s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-480788 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-480788 -n newest-cni-480788
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-480788 -n newest-cni-480788: exit status 2 (284.671837ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-480788 -n newest-cni-480788
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-480788 -n newest-cni-480788: exit status 2 (310.063401ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-480788 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-480788 -n newest-cni-480788
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-480788 -n newest-cni-480788
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.76s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-vdtvr" [3a63ed10-44d3-4370-9487-d533b60e02db] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010789535s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-079631 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestNetworkPlugins/group/auto/Start (52.02s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-738578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-738578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (52.017236316s)
--- PASS: TestNetworkPlugins/group/auto/Start (52.02s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-079631 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/embed-certs/serial/Pause (3.07s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-079631 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-079631 -n embed-certs-079631
E0711 00:59:21.916440   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/addons-906872/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-079631 -n embed-certs-079631: exit status 2 (302.107232ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-079631 -n embed-certs-079631
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-079631 -n embed-certs-079631: exit status 2 (323.271797ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-079631 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-079631 -n embed-certs-079631
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-079631 -n embed-certs-079631
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.07s)

TestNetworkPlugins/group/kindnet/Start (50.88s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-738578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-738578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (50.881040765s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (50.88s)

TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-738578 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

TestNetworkPlugins/group/auto/NetCatPod (8.32s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-738578 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-t4sk5" [7a8636a4-c614-472f-a1b4-53bc264ca874] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-t4sk5" [7a8636a4-c614-472f-a1b4-53bc264ca874] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.009112897s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.32s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-dp8qn" [f66405a5-71de-4f08-9185-6bb672ce4fab] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.016516745s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-738578 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-738578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-738578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-738578 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.32s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-738578 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-lb5rh" [58e1c850-ae5e-490d-9889-05a1740e0875] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-lb5rh" [58e1c850-ae5e-490d-9889-05a1740e0875] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.007821332s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.32s)

TestNetworkPlugins/group/kindnet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-738578 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-738578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-738578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

TestNetworkPlugins/group/calico/Start (60.67s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-738578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-738578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m0.669188671s)
--- PASS: TestNetworkPlugins/group/calico/Start (60.67s)

TestNetworkPlugins/group/custom-flannel/Start (55.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-738578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0711 01:01:18.869489   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/addons-906872/client.crt: no such file or directory
E0711 01:01:29.830793   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/functional-403028/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-738578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (55.241661119s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (55.24s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-lm8v2" [3f96e506-7713-4e00-a7dd-306081cb7ed1] Running
E0711 01:01:43.612061   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/no-preload-617524/client.crt: no such file or directory
E0711 01:01:43.617361   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/no-preload-617524/client.crt: no such file or directory
E0711 01:01:43.627635   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/no-preload-617524/client.crt: no such file or directory
E0711 01:01:43.647931   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/no-preload-617524/client.crt: no such file or directory
E0711 01:01:43.688191   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/no-preload-617524/client.crt: no such file or directory
E0711 01:01:43.768514   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/no-preload-617524/client.crt: no such file or directory
E0711 01:01:43.929058   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/no-preload-617524/client.crt: no such file or directory
E0711 01:01:44.249197   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/no-preload-617524/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.018242902s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-738578 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

TestNetworkPlugins/group/calico/NetCatPod (9.35s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-738578 replace --force -f testdata/netcat-deployment.yaml
E0711 01:01:44.890164   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/no-preload-617524/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-dmx8m" [7fffa828-a6cb-4156-946f-a471663200d9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0711 01:01:46.171076   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/no-preload-617524/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-dmx8m" [7fffa828-a6cb-4156-946f-a471663200d9] Running
E0711 01:01:48.731838   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/no-preload-617524/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.006727533s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.35s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-738578 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (8.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-738578 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-r8dth" [4dff1e1f-0f1e-4b33-8e53-c01f14c02bfc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-r8dth" [4dff1e1f-0f1e-4b33-8e53-c01f14c02bfc] Running
E0711 01:01:53.852435   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/no-preload-617524/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.006694331s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.30s)

TestNetworkPlugins/group/calico/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-738578 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

TestNetworkPlugins/group/calico/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-738578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-738578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-738578 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-738578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-738578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

TestNetworkPlugins/group/enable-default-cni/Start (79.52s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-738578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-738578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m19.522653514s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (79.52s)

TestNetworkPlugins/group/flannel/Start (60.46s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-738578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0711 01:02:24.572844   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/no-preload-617524/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-738578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m0.459043328s)
--- PASS: TestNetworkPlugins/group/flannel/Start (60.46s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-k5fmf" [13298c8e-4cb6-47ca-84e7-d9b30a46f3eb] Running
E0711 01:03:05.533435   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/no-preload-617524/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012809228s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-k5fmf" [13298c8e-4cb6-47ca-84e7-d9b30a46f3eb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006642755s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-884493 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-884493 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/old-k8s-version/serial/Pause (2.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-884493 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-884493 -n old-k8s-version-884493
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-884493 -n old-k8s-version-884493: exit status 2 (285.777553ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-884493 -n old-k8s-version-884493
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-884493 -n old-k8s-version-884493: exit status 2 (298.229792ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-884493 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-884493 -n old-k8s-version-884493
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-884493 -n old-k8s-version-884493
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.74s)

TestNetworkPlugins/group/bridge/Start (42.36s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-738578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-738578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (42.358589572s)
--- PASS: TestNetworkPlugins/group/bridge/Start (42.36s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-wqt7l" [7b56774e-1567-484b-be91-2f63253730b0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.017518148s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-738578 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (8.32s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-738578 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-q4nvf" [20dac085-7854-4c10-8346-4531583bf848] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-q4nvf" [20dac085-7854-4c10-8346-4531583bf848] Running
E0711 01:03:33.861921   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/ingress-addon-legacy-745307/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.009970923s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.32s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-738578 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-738578 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-n89n7" [5fa165ea-9f2e-40ee-a4fa-8dc61e3f4bc5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-n89n7" [5fa165ea-9f2e-40ee-a4fa-8dc61e3f4bc5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.00877956s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.39s)

TestNetworkPlugins/group/flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-738578 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.25s)

TestNetworkPlugins/group/flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-738578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

TestNetworkPlugins/group/flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-738578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-738578 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-738578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-738578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-738578 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

TestNetworkPlugins/group/bridge/NetCatPod (9.34s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-738578 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-7cwqb" [4e230180-cfac-4e3e-9916-27a19e32a7b1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-7cwqb" [4e230180-cfac-4e3e-9916-27a19e32a7b1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.00600749s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.34s)

TestNetworkPlugins/group/bridge/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-738578 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-738578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-738578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)
E0711 01:04:32.878850   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/functional-403028/client.crt: no such file or directory

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-5z45c" [e0d156f5-eea6-43a9-b060-f433c3cca002] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014892371s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-5z45c" [e0d156f5-eea6-43a9-b060-f433c3cca002] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006474028s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-634280 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-634280 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.69s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-634280 --alsologtostderr -v=1
E0711 01:05:10.656060   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/auto-738578/client.crt: no such file or directory
E0711 01:05:10.661717   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/auto-738578/client.crt: no such file or directory
E0711 01:05:10.672009   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/auto-738578/client.crt: no such file or directory
E0711 01:05:10.692406   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/auto-738578/client.crt: no such file or directory
E0711 01:05:10.732637   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/auto-738578/client.crt: no such file or directory
E0711 01:05:10.813790   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/auto-738578/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-634280 -n default-k8s-diff-port-634280
E0711 01:05:10.974556   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/auto-738578/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-634280 -n default-k8s-diff-port-634280: exit status 2 (284.123847ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-634280 -n default-k8s-diff-port-634280
E0711 01:05:11.294740   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/auto-738578/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-634280 -n default-k8s-diff-port-634280: exit status 2 (272.370316ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-634280 --alsologtostderr -v=1
E0711 01:05:11.935772   10168 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/auto-738578/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-634280 -n default-k8s-diff-port-634280
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-634280 -n default-k8s-diff-port-634280
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.69s)

Test skip (23/304)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.27.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.27.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.3/cached-images (0.00s)

TestDownloadOnly/v1.27.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.27.3/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.3/binaries (0.00s)

TestDownloadOnly/v1.27.3/kubectl (0s)
=== RUN   TestDownloadOnly/v1.27.3/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.27.3/kubectl (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-695281" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-695281
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (2.97s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-738578 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-738578
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-738578
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-738578
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-738578
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-738578
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-738578
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-738578
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-738578
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-738578
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-738578
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-738578"
>>> host: /etc/hosts:
* Profile "kubenet-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-738578"
>>> host: /etc/resolv.conf:
* Profile "kubenet-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-738578"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-738578
>>> host: crictl pods:
* Profile "kubenet-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-738578"
>>> host: crictl containers:
* Profile "kubenet-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-738578"
>>> k8s: describe netcat deployment:
error: context "kubenet-738578" does not exist
>>> k8s: describe netcat pod(s):
error: context "kubenet-738578" does not exist
>>> k8s: netcat logs:
error: context "kubenet-738578" does not exist
>>> k8s: describe coredns deployment:
error: context "kubenet-738578" does not exist
>>> k8s: describe coredns pods:
error: context "kubenet-738578" does not exist
>>> k8s: coredns logs:
error: context "kubenet-738578" does not exist
>>> k8s: describe api server pod(s):
error: context "kubenet-738578" does not exist
>>> k8s: api server logs:
error: context "kubenet-738578" does not exist
>>> host: /etc/cni:
* Profile "kubenet-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-738578"
>>> host: ip a s:
* Profile "kubenet-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-738578"
>>> host: ip r s:
* Profile "kubenet-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-738578"
>>> host: iptables-save:
* Profile "kubenet-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-738578"
>>> host: iptables table nat:
* Profile "kubenet-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-738578"
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-738578" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-738578" does not exist
>>> k8s: kube-proxy logs:
error: context "kubenet-738578" does not exist
>>> host: kubelet daemon status:
* Profile "kubenet-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-738578"
>>> host: kubelet daemon config:
* Profile "kubenet-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-738578"
>>> k8s: kubelet logs:
* Profile "kubenet-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-738578"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-738578"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-738578"
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/15452-3381/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 11 Jul 2023 00:48:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.30.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-840111
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/15452-3381/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 11 Jul 2023 00:47:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.22.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: missing-upgrade-576591
contexts:
- context:
    cluster: kubernetes-upgrade-840111
    user: kubernetes-upgrade-840111
  name: kubernetes-upgrade-840111
- context:
    cluster: missing-upgrade-576591
    extensions:
    - extension:
        last-update: Tue, 11 Jul 2023 00:47:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.22.0
      name: context_info
    namespace: default
    user: missing-upgrade-576591
  name: missing-upgrade-576591
current-context: kubernetes-upgrade-840111
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-840111
  user:
    client-certificate: /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/kubernetes-upgrade-840111/client.crt
    client-key: /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/kubernetes-upgrade-840111/client.key
- name: missing-upgrade-576591
  user:
    client-certificate: /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/missing-upgrade-576591/client.crt
    client-key: /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/missing-upgrade-576591/client.key
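The kubeconfig dump above is why every kubectl call in this debugLogs section fails: only the kubernetes-upgrade-840111 and missing-upgrade-576591 contexts are declared, and kubenet-738578 is not among them. A minimal sketch of listing the contexts a kubeconfig declares (the /tmp path and the trimmed heredoc content are assumptions for illustration, not part of the test run):

```shell
# Hypothetical trimmed kubeconfig mirroring the contexts in the dump above.
cat > /tmp/kubeconfig-sketch.yaml <<'EOF'
apiVersion: v1
kind: Config
contexts:
- context:
    cluster: kubernetes-upgrade-840111
  name: kubernetes-upgrade-840111
- context:
    cluster: missing-upgrade-576591
  name: missing-upgrade-576591
EOF
# Print the declared context names; grep stands in for
# `kubectl config get-contexts` so no cluster access is needed.
grep '^  name:' /tmp/kubeconfig-sketch.yaml | awk '{print $2}'
```

Any context requested that is absent from this list produces exactly the "context was not found for specified context" error repeated throughout the log.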
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-738578
>>> host: docker daemon status:
* Profile "kubenet-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-738578"
>>> host: docker daemon config:
* Profile "kubenet-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-738578"
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-738578"
>>> host: docker system info:
* Profile "kubenet-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-738578"
>>> host: cri-docker daemon status:
* Profile "kubenet-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-738578"
>>> host: cri-docker daemon config:
* Profile "kubenet-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-738578"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-738578"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-738578"
>>> host: cri-dockerd version:
* Profile "kubenet-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-738578"
>>> host: containerd daemon status:
* Profile "kubenet-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-738578"
>>> host: containerd daemon config:
* Profile "kubenet-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-738578"
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-738578"
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-738578"
>>> host: containerd config dump:
* Profile "kubenet-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-738578"
>>> host: crio daemon status:
* Profile "kubenet-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-738578"
>>> host: crio daemon config:
* Profile "kubenet-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-738578"
>>> host: /etc/crio:
* Profile "kubenet-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-738578"
>>> host: crio config:
* Profile "kubenet-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-738578"
----------------------- debugLogs end: kubenet-738578 [took: 2.800597841s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-738578" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-738578
--- SKIP: TestNetworkPlugins/group/kubenet (2.97s)

TestNetworkPlugins/group/cilium (3.54s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-738578 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-738578
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-738578
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-738578
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-738578

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-738578

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-738578

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-738578

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-738578

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-738578

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-738578

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-738578"

>>> host: /etc/hosts:
* Profile "cilium-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-738578"

>>> host: /etc/resolv.conf:
* Profile "cilium-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-738578"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-738578

>>> host: crictl pods:
* Profile "cilium-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-738578"

>>> host: crictl containers:
* Profile "cilium-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-738578"

>>> k8s: describe netcat deployment:
error: context "cilium-738578" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-738578" does not exist

>>> k8s: netcat logs:
error: context "cilium-738578" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-738578" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-738578" does not exist

>>> k8s: coredns logs:
error: context "cilium-738578" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-738578" does not exist

>>> k8s: api server logs:
error: context "cilium-738578" does not exist

>>> host: /etc/cni:
* Profile "cilium-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-738578"

>>> host: ip a s:
* Profile "cilium-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-738578"

>>> host: ip r s:
* Profile "cilium-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-738578"

>>> host: iptables-save:
* Profile "cilium-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-738578"

>>> host: iptables table nat:
* Profile "cilium-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-738578"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-738578

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-738578

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-738578" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-738578" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-738578

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-738578

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-738578" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-738578" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-738578" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-738578" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-738578" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-738578"

>>> host: kubelet daemon config:
* Profile "cilium-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-738578"

>>> k8s: kubelet logs:
* Profile "cilium-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-738578"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-738578"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-738578"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/15452-3381/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 11 Jul 2023 00:48:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.30.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-840111
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/15452-3381/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 11 Jul 2023 00:47:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.22.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: missing-upgrade-576591
contexts:
- context:
    cluster: kubernetes-upgrade-840111
    user: kubernetes-upgrade-840111
  name: kubernetes-upgrade-840111
- context:
    cluster: missing-upgrade-576591
    extensions:
    - extension:
        last-update: Tue, 11 Jul 2023 00:47:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.22.0
      name: context_info
    namespace: default
    user: missing-upgrade-576591
  name: missing-upgrade-576591
current-context: kubernetes-upgrade-840111
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-840111
  user:
    client-certificate: /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/kubernetes-upgrade-840111/client.crt
    client-key: /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/kubernetes-upgrade-840111/client.key
- name: missing-upgrade-576591
  user:
    client-certificate: /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/missing-upgrade-576591/client.crt
    client-key: /home/jenkins/minikube-integration/15452-3381/.minikube/profiles/missing-upgrade-576591/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-738578

>>> host: docker daemon status:
* Profile "cilium-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-738578"

>>> host: docker daemon config:
* Profile "cilium-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-738578"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-738578"

>>> host: docker system info:
* Profile "cilium-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-738578"

>>> host: cri-docker daemon status:
* Profile "cilium-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-738578"

>>> host: cri-docker daemon config:
* Profile "cilium-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-738578"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-738578"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-738578"

>>> host: cri-dockerd version:
* Profile "cilium-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-738578"

>>> host: containerd daemon status:
* Profile "cilium-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-738578"

>>> host: containerd daemon config:
* Profile "cilium-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-738578"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-738578"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-738578"

>>> host: containerd config dump:
* Profile "cilium-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-738578"

>>> host: crio daemon status:
* Profile "cilium-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-738578"

>>> host: crio daemon config:
* Profile "cilium-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-738578"

>>> host: /etc/crio:
* Profile "cilium-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-738578"

>>> host: crio config:
* Profile "cilium-738578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-738578"

----------------------- debugLogs end: cilium-738578 [took: 3.371394567s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-738578" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-738578
--- SKIP: TestNetworkPlugins/group/cilium (3.54s)