Test Report: Docker_Linux_containerd 16899

f8194aff3a7b98ea29a2e4b2da65132feb1e4119:2023-07-17:30190

Failed tests (3/304)

Order  Failed test                      Duration (s)
42     TestDockerEnvContainerd          34.87
102    TestFunctional/parallel/License  0.25
228    TestMissingContainerUpgrade      156.41
TestDockerEnvContainerd (34.87s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-533852 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-533852 --driver=docker  --container-runtime=containerd: (20.133654837s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-533852"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Zuf0GN6BlDiy/agent.33149" SSH_AGENT_PID="33150" DOCKER_HOST=ssh://docker@127.0.0.1:32777 docker version"
docker_test.go:220: (dbg) Non-zero exit: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Zuf0GN6BlDiy/agent.33149" SSH_AGENT_PID="33150" DOCKER_HOST=ssh://docker@127.0.0.1:32777 docker version": exit status 1 (142.974955ms)

-- stdout --
	Client: Docker Engine - Community
	 Version:           24.0.4
	 API version:       1.43
	 Go version:        go1.20.5
	 Git commit:        3713ee1
	 Built:             Fri Jul  7 14:50:57 2023
	 OS/Arch:           linux/amd64
	 Context:           default

-- /stdout --
** stderr ** 
	error during connect: Get "http://docker.example.com/v1.24/version": command [ssh -o ConnectTimeout=30 -l docker -p 32777 -- 127.0.0.1 docker system dial-stdio] has exited with exit status 255, please make sure the URL is valid, and Docker 18.09 or later is installed on the remote host: stderr=@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
	@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
	@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
	IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
	Someone could be eavesdropping on you right now (man-in-the-middle attack)!
	It is also possible that a host key has just been changed.
	The fingerprint for the RSA key sent by the remote host is
	SHA256:PjMWRaJBoXyyfVwtpyQwlh6TkH+x8S+k4JhGPMZ7YMg.
	Please contact your system administrator.
	Add correct host key in /home/jenkins/.ssh/known_hosts to get rid of this message.
	Offending RSA key in /home/jenkins/.ssh/known_hosts:247
	  remove with:
	  ssh-keygen -f "/home/jenkins/.ssh/known_hosts" -R "[127.0.0.1]:32777"
	RSA host key for [127.0.0.1]:32777 has changed and you have requested strict checking.
	Host key verification failed.
	

** /stderr **
docker_test.go:222: failed to execute 'docker version', error: exit status 1, output: 
-- stdout --
	Client: Docker Engine - Community
	 Version:           24.0.4
	 API version:       1.43
	 Go version:        go1.20.5
	 Git commit:        3713ee1
	 Built:             Fri Jul  7 14:50:57 2023
	 OS/Arch:           linux/amd64
	 Context:           default

-- /stdout --
** stderr ** 
	error during connect: Get "http://docker.example.com/v1.24/version": command [ssh -o ConnectTimeout=30 -l docker -p 32777 -- 127.0.0.1 docker system dial-stdio] has exited with exit status 255, please make sure the URL is valid, and Docker 18.09 or later is installed on the remote host: stderr=@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
	@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
	@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
	IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
	Someone could be eavesdropping on you right now (man-in-the-middle attack)!
	It is also possible that a host key has just been changed.
	The fingerprint for the RSA key sent by the remote host is
	SHA256:PjMWRaJBoXyyfVwtpyQwlh6TkH+x8S+k4JhGPMZ7YMg.
	Please contact your system administrator.
	Add correct host key in /home/jenkins/.ssh/known_hosts to get rid of this message.
	Offending RSA key in /home/jenkins/.ssh/known_hosts:247
	  remove with:
	  ssh-keygen -f "/home/jenkins/.ssh/known_hosts" -R "[127.0.0.1]:32777"
	RSA host key for [127.0.0.1]:32777 has changed and you have requested strict checking.
	Host key verification failed.
	

** /stderr **
panic.go:522: *** TestDockerEnvContainerd FAILED at 2023-07-17 21:44:10.682460853 +0000 UTC m=+351.057333785
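The stderr above shows the actual failure: `known_hosts` on the Jenkins agent still holds a host key for `[127.0.0.1]:32777` from an earlier minikube node that happened to be published on the same host port, so strict host-key checking rejects the new node's key. A minimal cleanup sketch along the lines the error message itself suggests (the path and port are taken from the log above; the `KNOWN_HOSTS` variable and existence guard are illustrative additions):

```shell
# Drop the stale host key that causes "Host key verification failed".
# [127.0.0.1]:32777 is the forwarded SSH port reported in the log above;
# each new minikube node generates a fresh host key, so a reused host
# port collides with the entry recorded for the previous node.
KNOWN_HOSTS="${KNOWN_HOSTS:-$HOME/.ssh/known_hosts}"
if [ -f "$KNOWN_HOSTS" ]; then
  ssh-keygen -f "$KNOWN_HOSTS" -R "[127.0.0.1]:32777"
fi
```

For ephemeral localhost ports like these, a test harness could alternatively bypass the problem with `-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null`, at the cost of skipping host verification entirely.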
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerEnvContainerd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect dockerenv-533852
helpers_test.go:235: (dbg) docker inspect dockerenv-533852:

-- stdout --
	[
	    {
	        "Id": "2c64ad3d13b6f11d15dcdd6837bd74f9cb9c7dc6bc250db720fd773d3627b840",
	        "Created": "2023-07-17T21:43:45.295922751Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 31081,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T21:43:45.557689873Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/2c64ad3d13b6f11d15dcdd6837bd74f9cb9c7dc6bc250db720fd773d3627b840/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2c64ad3d13b6f11d15dcdd6837bd74f9cb9c7dc6bc250db720fd773d3627b840/hostname",
	        "HostsPath": "/var/lib/docker/containers/2c64ad3d13b6f11d15dcdd6837bd74f9cb9c7dc6bc250db720fd773d3627b840/hosts",
	        "LogPath": "/var/lib/docker/containers/2c64ad3d13b6f11d15dcdd6837bd74f9cb9c7dc6bc250db720fd773d3627b840/2c64ad3d13b6f11d15dcdd6837bd74f9cb9c7dc6bc250db720fd773d3627b840-json.log",
	        "Name": "/dockerenv-533852",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "dockerenv-533852:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "dockerenv-533852",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 8388608000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 16777216000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c58ca4290e1b35468ee23ab303d2a86678db813fdf334b040ad7b88edf0297ef-init/diff:/var/lib/docker/overlay2/421db7cbff057550e9afb7e972fc7bf38750c383ed66811acc54b46111c29dfc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c58ca4290e1b35468ee23ab303d2a86678db813fdf334b040ad7b88edf0297ef/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c58ca4290e1b35468ee23ab303d2a86678db813fdf334b040ad7b88edf0297ef/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c58ca4290e1b35468ee23ab303d2a86678db813fdf334b040ad7b88edf0297ef/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "dockerenv-533852",
	                "Source": "/var/lib/docker/volumes/dockerenv-533852/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "dockerenv-533852",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "dockerenv-533852",
	                "name.minikube.sigs.k8s.io": "dockerenv-533852",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ba43f469a6640133f9c9d57808c17cd8ade9769a08127e4edac0a9f77e427055",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32777"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32776"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32773"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32775"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32774"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ba43f469a664",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "dockerenv-533852": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2c64ad3d13b6",
	                        "dockerenv-533852"
	                    ],
	                    "NetworkID": "923bf71bee53a1680aef23c05e960925bdbd7252fd1d347aeafa4520e59f363c",
	                    "EndpointID": "e60ab504c1de25f3cb7f4b92f1d23c84a1339b66b3436f83ee3e808418c01ccc",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p dockerenv-533852 -n dockerenv-533852
helpers_test.go:244: <<< TestDockerEnvContainerd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestDockerEnvContainerd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p dockerenv-533852 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p dockerenv-533852 logs -n 25: (1.245017518s)
helpers_test.go:252: TestDockerEnvContainerd logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |------------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	|  Command   |              Args              |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|------------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete     | -p download-docker-997614      | download-docker-997614 | jenkins | v1.31.0 | 17 Jul 23 21:39 UTC | 17 Jul 23 21:39 UTC |
	| start      | --download-only -p             | binary-mirror-381755   | jenkins | v1.31.0 | 17 Jul 23 21:39 UTC |                     |
	|            | binary-mirror-381755           |                        |         |         |                     |                     |
	|            | --alsologtostderr              |                        |         |         |                     |                     |
	|            | --binary-mirror                |                        |         |         |                     |                     |
	|            | http://127.0.0.1:39781         |                        |         |         |                     |                     |
	|            | --driver=docker                |                        |         |         |                     |                     |
	|            | --container-runtime=containerd |                        |         |         |                     |                     |
	| delete     | -p binary-mirror-381755        | binary-mirror-381755   | jenkins | v1.31.0 | 17 Jul 23 21:39 UTC | 17 Jul 23 21:39 UTC |
	| start      | -p addons-767732               | addons-767732          | jenkins | v1.31.0 | 17 Jul 23 21:39 UTC | 17 Jul 23 21:41 UTC |
	|            | --wait=true --memory=4000      |                        |         |         |                     |                     |
	|            | --alsologtostderr              |                        |         |         |                     |                     |
	|            | --addons=registry              |                        |         |         |                     |                     |
	|            | --addons=metrics-server        |                        |         |         |                     |                     |
	|            | --addons=volumesnapshots       |                        |         |         |                     |                     |
	|            | --addons=csi-hostpath-driver   |                        |         |         |                     |                     |
	|            | --addons=gcp-auth              |                        |         |         |                     |                     |
	|            | --addons=cloud-spanner         |                        |         |         |                     |                     |
	|            | --addons=inspektor-gadget      |                        |         |         |                     |                     |
	|            | --driver=docker                |                        |         |         |                     |                     |
	|            | --container-runtime=containerd |                        |         |         |                     |                     |
	|            | --addons=ingress               |                        |         |         |                     |                     |
	|            | --addons=ingress-dns           |                        |         |         |                     |                     |
	|            | --addons=helm-tiller           |                        |         |         |                     |                     |
	| addons     | disable inspektor-gadget -p    | addons-767732          | jenkins | v1.31.0 | 17 Jul 23 21:41 UTC | 17 Jul 23 21:41 UTC |
	|            | addons-767732                  |                        |         |         |                     |                     |
	| addons     | disable cloud-spanner -p       | addons-767732          | jenkins | v1.31.0 | 17 Jul 23 21:41 UTC | 17 Jul 23 21:41 UTC |
	|            | addons-767732                  |                        |         |         |                     |                     |
	| addons     | addons-767732 addons           | addons-767732          | jenkins | v1.31.0 | 17 Jul 23 21:41 UTC | 17 Jul 23 21:41 UTC |
	|            | disable metrics-server         |                        |         |         |                     |                     |
	|            | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons     | enable headlamp                | addons-767732          | jenkins | v1.31.0 | 17 Jul 23 21:41 UTC | 17 Jul 23 21:41 UTC |
	|            | -p addons-767732               |                        |         |         |                     |                     |
	|            | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ssh        | addons-767732 ssh curl -s      | addons-767732          | jenkins | v1.31.0 | 17 Jul 23 21:41 UTC | 17 Jul 23 21:41 UTC |
	|            | http://127.0.0.1/ -H 'Host:    |                        |         |         |                     |                     |
	|            | nginx.example.com'             |                        |         |         |                     |                     |
	| ip         | addons-767732 ip               | addons-767732          | jenkins | v1.31.0 | 17 Jul 23 21:41 UTC | 17 Jul 23 21:41 UTC |
	| addons     | addons-767732 addons disable   | addons-767732          | jenkins | v1.31.0 | 17 Jul 23 21:41 UTC | 17 Jul 23 21:41 UTC |
	|            | ingress-dns --alsologtostderr  |                        |         |         |                     |                     |
	|            | -v=1                           |                        |         |         |                     |                     |
	| addons     | addons-767732 addons disable   | addons-767732          | jenkins | v1.31.0 | 17 Jul 23 21:41 UTC | 17 Jul 23 21:41 UTC |
	|            | ingress --alsologtostderr -v=1 |                        |         |         |                     |                     |
	| ip         | addons-767732 ip               | addons-767732          | jenkins | v1.31.0 | 17 Jul 23 21:41 UTC | 17 Jul 23 21:41 UTC |
	| addons     | addons-767732 addons disable   | addons-767732          | jenkins | v1.31.0 | 17 Jul 23 21:41 UTC | 17 Jul 23 21:41 UTC |
	|            | registry --alsologtostderr     |                        |         |         |                     |                     |
	|            | -v=1                           |                        |         |         |                     |                     |
	| addons     | addons-767732 addons disable   | addons-767732          | jenkins | v1.31.0 | 17 Jul 23 21:41 UTC | 17 Jul 23 21:41 UTC |
	|            | helm-tiller --alsologtostderr  |                        |         |         |                     |                     |
	|            | -v=1                           |                        |         |         |                     |                     |
	| addons     | addons-767732 addons           | addons-767732          | jenkins | v1.31.0 | 17 Jul 23 21:42 UTC | 17 Jul 23 21:43 UTC |
	|            | disable csi-hostpath-driver    |                        |         |         |                     |                     |
	|            | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons     | addons-767732 addons           | addons-767732          | jenkins | v1.31.0 | 17 Jul 23 21:43 UTC | 17 Jul 23 21:43 UTC |
	|            | disable volumesnapshots        |                        |         |         |                     |                     |
	|            | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons     | addons-767732 addons disable   | addons-767732          | jenkins | v1.31.0 | 17 Jul 23 21:43 UTC | 17 Jul 23 21:43 UTC |
	|            | gcp-auth --alsologtostderr     |                        |         |         |                     |                     |
	|            | -v=1                           |                        |         |         |                     |                     |
	| stop       | -p addons-767732               | addons-767732          | jenkins | v1.31.0 | 17 Jul 23 21:43 UTC | 17 Jul 23 21:43 UTC |
	| addons     | enable dashboard -p            | addons-767732          | jenkins | v1.31.0 | 17 Jul 23 21:43 UTC | 17 Jul 23 21:43 UTC |
	|            | addons-767732                  |                        |         |         |                     |                     |
	| addons     | disable dashboard -p           | addons-767732          | jenkins | v1.31.0 | 17 Jul 23 21:43 UTC | 17 Jul 23 21:43 UTC |
	|            | addons-767732                  |                        |         |         |                     |                     |
	| addons     | disable gvisor -p              | addons-767732          | jenkins | v1.31.0 | 17 Jul 23 21:43 UTC | 17 Jul 23 21:43 UTC |
	|            | addons-767732                  |                        |         |         |                     |                     |
	| delete     | -p addons-767732               | addons-767732          | jenkins | v1.31.0 | 17 Jul 23 21:43 UTC | 17 Jul 23 21:43 UTC |
	| start      | -p dockerenv-533852            | dockerenv-533852       | jenkins | v1.31.0 | 17 Jul 23 21:43 UTC | 17 Jul 23 21:43 UTC |
	|            | --driver=docker                |                        |         |         |                     |                     |
	|            | --container-runtime=containerd |                        |         |         |                     |                     |
	| docker-env | --ssh-host --ssh-add -p        | dockerenv-533852       | jenkins | v1.31.0 | 17 Jul 23 21:44 UTC | 17 Jul 23 21:44 UTC |
	|            | dockerenv-533852               |                        |         |         |                     |                     |
	|------------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 21:43:39
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 21:43:39.618386   30482 out.go:296] Setting OutFile to fd 1 ...
	I0717 21:43:39.618495   30482 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:43:39.618497   30482 out.go:309] Setting ErrFile to fd 2...
	I0717 21:43:39.618501   30482 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:43:39.618686   30482 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-6342/.minikube/bin
	I0717 21:43:39.619237   30482 out.go:303] Setting JSON to false
	I0717 21:43:39.620315   30482 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1566,"bootTime":1689628654,"procs":383,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 21:43:39.620367   30482 start.go:138] virtualization: kvm guest
	I0717 21:43:39.623144   30482 out.go:177] * [dockerenv-533852] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 21:43:39.624666   30482 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 21:43:39.624650   30482 notify.go:220] Checking for updates...
	I0717 21:43:39.626099   30482 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 21:43:39.627467   30482 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-6342/kubeconfig
	I0717 21:43:39.628879   30482 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-6342/.minikube
	I0717 21:43:39.630211   30482 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 21:43:39.631468   30482 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 21:43:39.632989   30482 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 21:43:39.654699   30482 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 21:43:39.654785   30482 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 21:43:39.706968   30482 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:36 SystemTime:2023-07-17 21:43:39.698168317 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 21:43:39.707085   30482 docker.go:294] overlay module found
	I0717 21:43:39.708869   30482 out.go:177] * Using the docker driver based on user configuration
	I0717 21:43:39.710205   30482 start.go:298] selected driver: docker
	I0717 21:43:39.710212   30482 start.go:880] validating driver "docker" against <nil>
	I0717 21:43:39.710220   30482 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 21:43:39.710305   30482 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 21:43:39.763085   30482 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:36 SystemTime:2023-07-17 21:43:39.75460605 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 21:43:39.763227   30482 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 21:43:39.763680   30482 start_flags.go:382] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0717 21:43:39.763881   30482 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 21:43:39.765486   30482 out.go:177] * Using Docker driver with root privileges
	I0717 21:43:39.766781   30482 cni.go:84] Creating CNI manager for ""
	I0717 21:43:39.766789   30482 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0717 21:43:39.766796   30482 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 21:43:39.766802   30482 start_flags.go:319] config:
	{Name:dockerenv-533852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:dockerenv-533852 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: Netw
orkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:43:39.768480   30482 out.go:177] * Starting control plane node dockerenv-533852 in cluster dockerenv-533852
	I0717 21:43:39.769708   30482 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0717 21:43:39.770984   30482 out.go:177] * Pulling base image ...
	I0717 21:43:39.772308   30482 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0717 21:43:39.772333   30482 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16899-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4
	I0717 21:43:39.772342   30482 cache.go:57] Caching tarball of preloaded images
	I0717 21:43:39.772394   30482 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 21:43:39.772415   30482 preload.go:174] Found /home/jenkins/minikube-integration/16899-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 21:43:39.772421   30482 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on containerd
	I0717 21:43:39.772705   30482 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/dockerenv-533852/config.json ...
	I0717 21:43:39.772721   30482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/dockerenv-533852/config.json: {Name:mk326fd4720c3c5c4711c3dcfa6f42ef7fe11393 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:43:39.787283   30482 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 21:43:39.787293   30482 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0717 21:43:39.787309   30482 cache.go:195] Successfully downloaded all kic artifacts
	I0717 21:43:39.787337   30482 start.go:365] acquiring machines lock for dockerenv-533852: {Name:mk5c3af375a0cc6937ec457bd265e7b5f38843cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:43:39.787417   30482 start.go:369] acquired machines lock for "dockerenv-533852" in 67.332µs
	I0717 21:43:39.787433   30482 start.go:93] Provisioning new machine with config: &{Name:dockerenv-533852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:dockerenv-533852 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: S
taticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0717 21:43:39.787519   30482 start.go:125] createHost starting for "" (driver="docker")
	I0717 21:43:39.789479   30482 out.go:204] * Creating docker container (CPUs=2, Memory=8000MB) ...
	I0717 21:43:39.789674   30482 start.go:159] libmachine.API.Create for "dockerenv-533852" (driver="docker")
	I0717 21:43:39.789693   30482 client.go:168] LocalClient.Create starting
	I0717 21:43:39.789751   30482 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16899-6342/.minikube/certs/ca.pem
	I0717 21:43:39.789773   30482 main.go:141] libmachine: Decoding PEM data...
	I0717 21:43:39.789784   30482 main.go:141] libmachine: Parsing certificate...
	I0717 21:43:39.789827   30482 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16899-6342/.minikube/certs/cert.pem
	I0717 21:43:39.789839   30482 main.go:141] libmachine: Decoding PEM data...
	I0717 21:43:39.789846   30482 main.go:141] libmachine: Parsing certificate...
	I0717 21:43:39.790114   30482 cli_runner.go:164] Run: docker network inspect dockerenv-533852 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 21:43:39.804948   30482 cli_runner.go:211] docker network inspect dockerenv-533852 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 21:43:39.805005   30482 network_create.go:281] running [docker network inspect dockerenv-533852] to gather additional debugging logs...
	I0717 21:43:39.805015   30482 cli_runner.go:164] Run: docker network inspect dockerenv-533852
	W0717 21:43:39.818908   30482 cli_runner.go:211] docker network inspect dockerenv-533852 returned with exit code 1
	I0717 21:43:39.818922   30482 network_create.go:284] error running [docker network inspect dockerenv-533852]: docker network inspect dockerenv-533852: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network dockerenv-533852 not found
	I0717 21:43:39.818929   30482 network_create.go:286] output of [docker network inspect dockerenv-533852]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network dockerenv-533852 not found
	
	** /stderr **
	I0717 21:43:39.818961   30482 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 21:43:39.833786   30482 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0013de410}
	I0717 21:43:39.833822   30482 network_create.go:123] attempt to create docker network dockerenv-533852 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0717 21:43:39.833887   30482 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=dockerenv-533852 dockerenv-533852
	I0717 21:43:39.883433   30482 network_create.go:107] docker network dockerenv-533852 192.168.49.0/24 created
	I0717 21:43:39.883452   30482 kic.go:117] calculated static IP "192.168.49.2" for the "dockerenv-533852" container
	I0717 21:43:39.883513   30482 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 21:43:39.898440   30482 cli_runner.go:164] Run: docker volume create dockerenv-533852 --label name.minikube.sigs.k8s.io=dockerenv-533852 --label created_by.minikube.sigs.k8s.io=true
	I0717 21:43:39.916032   30482 oci.go:103] Successfully created a docker volume dockerenv-533852
	I0717 21:43:39.916109   30482 cli_runner.go:164] Run: docker run --rm --name dockerenv-533852-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-533852 --entrypoint /usr/bin/test -v dockerenv-533852:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0717 21:43:40.431628   30482 oci.go:107] Successfully prepared a docker volume dockerenv-533852
	I0717 21:43:40.431659   30482 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0717 21:43:40.431677   30482 kic.go:190] Starting extracting preloaded images to volume ...
	I0717 21:43:40.431745   30482 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16899-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v dockerenv-533852:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 21:43:45.227292   30482 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16899-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v dockerenv-533852:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.795480762s)
	I0717 21:43:45.227315   30482 kic.go:199] duration metric: took 4.795635 seconds to extract preloaded images to volume
	W0717 21:43:45.227607   30482 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0717 21:43:45.227683   30482 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 21:43:45.281199   30482 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname dockerenv-533852 --name dockerenv-533852 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-533852 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=dockerenv-533852 --network dockerenv-533852 --ip 192.168.49.2 --volume dockerenv-533852:/var --security-opt apparmor=unconfined --memory=8000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 21:43:45.564616   30482 cli_runner.go:164] Run: docker container inspect dockerenv-533852 --format={{.State.Running}}
	I0717 21:43:45.580639   30482 cli_runner.go:164] Run: docker container inspect dockerenv-533852 --format={{.State.Status}}
	I0717 21:43:45.597024   30482 cli_runner.go:164] Run: docker exec dockerenv-533852 stat /var/lib/dpkg/alternatives/iptables
	I0717 21:43:45.655575   30482 oci.go:144] the created container "dockerenv-533852" has a running status.
	I0717 21:43:45.655594   30482 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16899-6342/.minikube/machines/dockerenv-533852/id_rsa...
	I0717 21:43:45.848564   30482 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16899-6342/.minikube/machines/dockerenv-533852/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 21:43:45.868485   30482 cli_runner.go:164] Run: docker container inspect dockerenv-533852 --format={{.State.Status}}
	I0717 21:43:45.888519   30482 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 21:43:45.888535   30482 kic_runner.go:114] Args: [docker exec --privileged dockerenv-533852 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 21:43:45.948442   30482 cli_runner.go:164] Run: docker container inspect dockerenv-533852 --format={{.State.Status}}
	I0717 21:43:45.969989   30482 machine.go:88] provisioning docker machine ...
	I0717 21:43:45.970019   30482 ubuntu.go:169] provisioning hostname "dockerenv-533852"
	I0717 21:43:45.970079   30482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-533852
	I0717 21:43:45.986085   30482 main.go:141] libmachine: Using SSH client type: native
	I0717 21:43:45.986729   30482 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32777 <nil> <nil>}
	I0717 21:43:45.986747   30482 main.go:141] libmachine: About to run SSH command:
	sudo hostname dockerenv-533852 && echo "dockerenv-533852" | sudo tee /etc/hostname
	I0717 21:43:46.217195   30482 main.go:141] libmachine: SSH cmd err, output: <nil>: dockerenv-533852
	
	I0717 21:43:46.217268   30482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-533852
	I0717 21:43:46.232877   30482 main.go:141] libmachine: Using SSH client type: native
	I0717 21:43:46.233429   30482 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32777 <nil> <nil>}
	I0717 21:43:46.233451   30482 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdockerenv-533852' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 dockerenv-533852/g' /etc/hosts;
				else 
					echo '127.0.1.1 dockerenv-533852' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 21:43:46.355345   30482 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 21:43:46.355363   30482 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16899-6342/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-6342/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-6342/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-6342/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-6342/.minikube}
	I0717 21:43:46.355390   30482 ubuntu.go:177] setting up certificates
	I0717 21:43:46.355399   30482 provision.go:83] configureAuth start
	I0717 21:43:46.355444   30482 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-533852
	I0717 21:43:46.371050   30482 provision.go:138] copyHostCerts
	I0717 21:43:46.371106   30482 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-6342/.minikube/ca.pem, removing ...
	I0717 21:43:46.371116   30482 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-6342/.minikube/ca.pem
	I0717 21:43:46.371172   30482 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-6342/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-6342/.minikube/ca.pem (1082 bytes)
	I0717 21:43:46.371254   30482 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-6342/.minikube/cert.pem, removing ...
	I0717 21:43:46.371256   30482 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-6342/.minikube/cert.pem
	I0717 21:43:46.371278   30482 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-6342/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-6342/.minikube/cert.pem (1123 bytes)
	I0717 21:43:46.371332   30482 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-6342/.minikube/key.pem, removing ...
	I0717 21:43:46.371335   30482 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-6342/.minikube/key.pem
	I0717 21:43:46.371352   30482 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-6342/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-6342/.minikube/key.pem (1675 bytes)
	I0717 21:43:46.371400   30482 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-6342/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-6342/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-6342/.minikube/certs/ca-key.pem org=jenkins.dockerenv-533852 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube dockerenv-533852]
	I0717 21:43:46.780190   30482 provision.go:172] copyRemoteCerts
	I0717 21:43:46.780231   30482 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 21:43:46.780259   30482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-533852
	I0717 21:43:46.795438   30482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32777 SSHKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/machines/dockerenv-533852/id_rsa Username:docker}
	I0717 21:43:46.883304   30482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 21:43:46.902810   30482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0717 21:43:46.922173   30482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 21:43:46.941010   30482 provision.go:86] duration metric: configureAuth took 585.601148ms
	I0717 21:43:46.941025   30482 ubuntu.go:193] setting minikube options for container-runtime
	I0717 21:43:46.941175   30482 config.go:182] Loaded profile config "dockerenv-533852": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0717 21:43:46.941180   30482 machine.go:91] provisioned docker machine in 971.181109ms
	I0717 21:43:46.941184   30482 client.go:171] LocalClient.Create took 7.151488105s
	I0717 21:43:46.941203   30482 start.go:167] duration metric: libmachine.API.Create for "dockerenv-533852" took 7.151529225s
	I0717 21:43:46.941208   30482 start.go:300] post-start starting for "dockerenv-533852" (driver="docker")
	I0717 21:43:46.941215   30482 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 21:43:46.941252   30482 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 21:43:46.941283   30482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-533852
	I0717 21:43:46.956915   30482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32777 SSHKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/machines/dockerenv-533852/id_rsa Username:docker}
	I0717 21:43:47.047617   30482 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 21:43:47.050394   30482 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 21:43:47.050432   30482 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 21:43:47.050443   30482 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 21:43:47.050447   30482 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 21:43:47.050454   30482 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-6342/.minikube/addons for local assets ...
	I0717 21:43:47.050495   30482 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-6342/.minikube/files for local assets ...
	I0717 21:43:47.050509   30482 start.go:303] post-start completed in 109.296682ms
	I0717 21:43:47.050755   30482 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-533852
	I0717 21:43:47.065871   30482 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/dockerenv-533852/config.json ...
	I0717 21:43:47.066073   30482 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 21:43:47.066101   30482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-533852
	I0717 21:43:47.081081   30482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32777 SSHKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/machines/dockerenv-533852/id_rsa Username:docker}
	I0717 21:43:47.168072   30482 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 21:43:47.172004   30482 start.go:128] duration metric: createHost completed in 7.384462687s
	I0717 21:43:47.172019   30482 start.go:83] releasing machines lock for "dockerenv-533852", held for 7.384597743s
	I0717 21:43:47.172086   30482 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-533852
	I0717 21:43:47.187229   30482 ssh_runner.go:195] Run: cat /version.json
	I0717 21:43:47.187266   30482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-533852
	I0717 21:43:47.187327   30482 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 21:43:47.187374   30482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-533852
	I0717 21:43:47.202470   30482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32777 SSHKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/machines/dockerenv-533852/id_rsa Username:docker}
	I0717 21:43:47.203864   30482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32777 SSHKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/machines/dockerenv-533852/id_rsa Username:docker}
	I0717 21:43:47.372799   30482 ssh_runner.go:195] Run: systemctl --version
	I0717 21:43:47.376581   30482 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 21:43:47.380399   30482 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0717 21:43:47.400853   30482 cni.go:236] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0717 21:43:47.400913   30482 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 21:43:47.423835   30482 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0717 21:43:47.423844   30482 start.go:466] detecting cgroup driver to use...
	I0717 21:43:47.423873   30482 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 21:43:47.423911   30482 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 21:43:47.433912   30482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 21:43:47.442804   30482 docker.go:196] disabling cri-docker service (if available) ...
	I0717 21:43:47.442830   30482 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 21:43:47.453709   30482 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 21:43:47.464848   30482 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 21:43:47.541435   30482 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 21:43:47.617424   30482 docker.go:212] disabling docker service ...
	I0717 21:43:47.617463   30482 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 21:43:47.635033   30482 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 21:43:47.644839   30482 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 21:43:47.717689   30482 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 21:43:47.793284   30482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 21:43:47.802507   30482 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
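The two log lines above write `/etc/crictl.yaml` so that `crictl` talks to containerd's socket instead of probing for a runtime. A minimal sketch of the same write, using a scratch directory instead of `/etc` so it runs without root (the paths here are illustrative, not the ones minikube uses on the host):

```shell
# Recreate the crictl.yaml write against a throwaway prefix.
tmpdir=$(mktemp -d)
mkdir -p "$tmpdir/etc"
printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' \
  | tee "$tmpdir/etc/crictl.yaml" >/dev/null
cat "$tmpdir/etc/crictl.yaml"
```

With this file in place (at the real `/etc/crictl.yaml`), the later `crictl version` and `crictl images` calls in this log no longer need a `--runtime-endpoint` flag.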
	I0717 21:43:47.815518   30482 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 21:43:47.823457   30482 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 21:43:47.831448   30482 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 21:43:47.831479   30482 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 21:43:47.839243   30482 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 21:43:47.847089   30482 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 21:43:47.854653   30482 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 21:43:47.862550   30482 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 21:43:47.869744   30482 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
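The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place: pinning the pause image, forcing `SystemdCgroup = false` to match the detected "cgroupfs" host driver, and normalizing runtime names. A minimal sketch of two of those patches applied to a scratch copy of the file (the sample TOML content is assumed, not taken from the test host):

```shell
# Patch a sample config.toml the same way the log does, indentation-preserving.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF
# \1 keeps the captured leading whitespace so TOML nesting survives.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$conf"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$conf"
grep -E 'sandbox_image|SystemdCgroup' "$conf"
```

The `-r` flag enables extended regexes (GNU sed); the pattern anchors on the whole line so the substitution is idempotent across restarts.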
	I0717 21:43:47.877523   30482 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 21:43:47.884142   30482 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 21:43:47.890625   30482 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 21:43:47.960554   30482 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 21:43:48.031230   30482 start.go:513] Will wait 60s for socket path /run/containerd/containerd.sock
	I0717 21:43:48.031288   30482 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0717 21:43:48.035079   30482 start.go:534] Will wait 60s for crictl version
	I0717 21:43:48.035124   30482 ssh_runner.go:195] Run: which crictl
	I0717 21:43:48.038055   30482 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 21:43:48.071139   30482 start.go:550] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.21
	RuntimeApiVersion:  v1
	I0717 21:43:48.071178   30482 ssh_runner.go:195] Run: containerd --version
	I0717 21:43:48.093540   30482 ssh_runner.go:195] Run: containerd --version
	I0717 21:43:48.117365   30482 out.go:177] * Preparing Kubernetes v1.27.3 on containerd 1.6.21 ...
	I0717 21:43:48.118857   30482 cli_runner.go:164] Run: docker network inspect dockerenv-533852 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 21:43:48.134895   30482 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0717 21:43:48.138231   30482 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
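The `/etc/hosts` edit above is an idempotent single-entry rewrite: strip any existing `host.minikube.internal` line, append the fresh mapping, then copy the result back. A sketch of the same pattern against a scratch file (the `10.0.0.9` stale entry is invented for illustration):

```shell
# Replace one hosts entry without disturbing the rest of the file.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.49.1\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
cat "$hosts"
```

The `$'\t'` pattern matches a literal tab, so only exact hostname entries are dropped; writing to a temp file first and `cp`-ing back mirrors the log's `> /tmp/h.$$; sudo cp` dance, which avoids truncating `/etc/hosts` on a failed pipeline.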
	I0717 21:43:48.147904   30482 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0717 21:43:48.147945   30482 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 21:43:48.177089   30482 containerd.go:604] all images are preloaded for containerd runtime.
	I0717 21:43:48.177101   30482 containerd.go:518] Images already preloaded, skipping extraction
	I0717 21:43:48.177140   30482 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 21:43:48.206151   30482 containerd.go:604] all images are preloaded for containerd runtime.
	I0717 21:43:48.206161   30482 cache_images.go:84] Images are preloaded, skipping loading
	I0717 21:43:48.206202   30482 ssh_runner.go:195] Run: sudo crictl info
	I0717 21:43:48.235103   30482 cni.go:84] Creating CNI manager for ""
	I0717 21:43:48.235113   30482 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0717 21:43:48.235123   30482 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 21:43:48.235139   30482 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:dockerenv-533852 NodeName:dockerenv-533852 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 21:43:48.235248   30482 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "dockerenv-533852"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 21:43:48.235305   30482 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=dockerenv-533852 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:dockerenv-533852 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 21:43:48.235350   30482 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 21:43:48.242965   30482 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 21:43:48.243020   30482 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 21:43:48.250293   30482 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0717 21:43:48.265002   30482 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 21:43:48.279647   30482 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0717 21:43:48.294065   30482 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0717 21:43:48.296935   30482 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 21:43:48.305960   30482 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/dockerenv-533852 for IP: 192.168.49.2
	I0717 21:43:48.305976   30482 certs.go:190] acquiring lock for shared ca certs: {Name:mk55d4c61e71de076f17ec844eb5cb8d7320ed01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:43:48.306092   30482 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16899-6342/.minikube/ca.key
	I0717 21:43:48.306134   30482 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16899-6342/.minikube/proxy-client-ca.key
	I0717 21:43:48.306196   30482 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/dockerenv-533852/client.key
	I0717 21:43:48.306210   30482 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/dockerenv-533852/client.crt with IP's: []
	I0717 21:43:48.805355   30482 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/dockerenv-533852/client.crt ...
	I0717 21:43:48.805369   30482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/dockerenv-533852/client.crt: {Name:mkf04312032a6d820113defa72ff54964e1ffaae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:43:48.805554   30482 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/dockerenv-533852/client.key ...
	I0717 21:43:48.805561   30482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/dockerenv-533852/client.key: {Name:mk1f3229d3b26035fc9d5ffc129440a81c8635c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:43:48.805657   30482 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/dockerenv-533852/apiserver.key.dd3b5fb2
	I0717 21:43:48.805666   30482 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/dockerenv-533852/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 21:43:48.919731   30482 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/dockerenv-533852/apiserver.crt.dd3b5fb2 ...
	I0717 21:43:48.919742   30482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/dockerenv-533852/apiserver.crt.dd3b5fb2: {Name:mka7d55c71e712b85de49d37e56c6ed2f86390f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:43:48.919925   30482 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/dockerenv-533852/apiserver.key.dd3b5fb2 ...
	I0717 21:43:48.919933   30482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/dockerenv-533852/apiserver.key.dd3b5fb2: {Name:mk61c568614f1a64c72a025dc17db2351b5692a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:43:48.920021   30482 certs.go:337] copying /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/dockerenv-533852/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/dockerenv-533852/apiserver.crt
	I0717 21:43:48.920080   30482 certs.go:341] copying /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/dockerenv-533852/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/dockerenv-533852/apiserver.key
	I0717 21:43:48.920121   30482 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/dockerenv-533852/proxy-client.key
	I0717 21:43:48.920131   30482 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/dockerenv-533852/proxy-client.crt with IP's: []
	I0717 21:43:49.045407   30482 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/dockerenv-533852/proxy-client.crt ...
	I0717 21:43:49.045421   30482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/dockerenv-533852/proxy-client.crt: {Name:mke4ce4a10f08094370b7f3e0e7475f41909110c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:43:49.045595   30482 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/dockerenv-533852/proxy-client.key ...
	I0717 21:43:49.045604   30482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/dockerenv-533852/proxy-client.key: {Name:mkc9a2de90070d3dc9405ec2144f385080a1fb2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:43:49.045776   30482 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-6342/.minikube/certs/home/jenkins/minikube-integration/16899-6342/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 21:43:49.045814   30482 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-6342/.minikube/certs/home/jenkins/minikube-integration/16899-6342/.minikube/certs/ca.pem (1082 bytes)
	I0717 21:43:49.045833   30482 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-6342/.minikube/certs/home/jenkins/minikube-integration/16899-6342/.minikube/certs/cert.pem (1123 bytes)
	I0717 21:43:49.045851   30482 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-6342/.minikube/certs/home/jenkins/minikube-integration/16899-6342/.minikube/certs/key.pem (1675 bytes)
	I0717 21:43:49.046410   30482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/dockerenv-533852/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 21:43:49.067052   30482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/dockerenv-533852/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 21:43:49.086099   30482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/dockerenv-533852/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 21:43:49.104892   30482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/dockerenv-533852/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 21:43:49.124744   30482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 21:43:49.145327   30482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 21:43:49.165158   30482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 21:43:49.184451   30482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 21:43:49.203515   30482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 21:43:49.222954   30482 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 21:43:49.237250   30482 ssh_runner.go:195] Run: openssl version
	I0717 21:43:49.241848   30482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 21:43:49.249407   30482 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 21:43:49.252341   30482 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:39 /usr/share/ca-certificates/minikubeCA.pem
	I0717 21:43:49.252369   30482 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 21:43:49.258040   30482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
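The two commands above install `minikubeCA.pem` into the system trust store by its OpenSSL subject hash: `openssl x509 -hash` computes the hash, and the cert is symlinked as `<hash>.0` under `/etc/ssl/certs` (here the precomputed `b5213941.0`). A sketch of that convention end to end, with a freshly generated throwaway CA in a scratch directory (the `/CN=minikubeCA` subject is illustrative):

```shell
# Generate a self-signed cert, then link it by subject hash as OpenSSL expects.
certdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=minikubeCA" \
  -keyout "$certdir/ca.key" -out "$certdir/minikubeCA.pem" -days 1 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$certdir/minikubeCA.pem")
ln -fs "$certdir/minikubeCA.pem" "$certdir/$hash.0"
ls -l "$certdir/$hash.0"
```

OpenSSL resolves trust lookups by scanning the cert directory for `<subject-hash>.N` names, which is why the log's `test -L ... || ln -fs` guard is enough to make the minikube CA trusted on the node.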
	I0717 21:43:49.265673   30482 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 21:43:49.268380   30482 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 21:43:49.268411   30482 kubeadm.go:404] StartCluster: {Name:dockerenv-533852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:dockerenv-533852 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:43:49.268471   30482 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0717 21:43:49.268497   30482 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 21:43:49.298356   30482 cri.go:89] found id: ""
	I0717 21:43:49.298405   30482 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 21:43:49.305596   30482 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 21:43:49.312642   30482 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 21:43:49.312690   30482 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 21:43:49.320002   30482 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 21:43:49.320041   30482 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 21:43:49.397485   30482 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1037-gcp\n", err: exit status 1
	I0717 21:43:49.460818   30482 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 21:43:57.938954   30482 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0717 21:43:57.939024   30482 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 21:43:57.939118   30482 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0717 21:43:57.939181   30482 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1037-gcp
	I0717 21:43:57.939211   30482 kubeadm.go:322] OS: Linux
	I0717 21:43:57.939246   30482 kubeadm.go:322] CGROUPS_CPU: enabled
	I0717 21:43:57.939292   30482 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0717 21:43:57.939329   30482 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0717 21:43:57.939385   30482 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0717 21:43:57.939422   30482 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0717 21:43:57.939493   30482 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0717 21:43:57.939535   30482 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0717 21:43:57.939573   30482 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0717 21:43:57.939609   30482 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0717 21:43:57.939667   30482 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 21:43:57.939746   30482 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 21:43:57.939852   30482 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 21:43:57.939906   30482 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 21:43:57.941406   30482 out.go:204]   - Generating certificates and keys ...
	I0717 21:43:57.941479   30482 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 21:43:57.941541   30482 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 21:43:57.941593   30482 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 21:43:57.941650   30482 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0717 21:43:57.941700   30482 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0717 21:43:57.941743   30482 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0717 21:43:57.941793   30482 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0717 21:43:57.941898   30482 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [dockerenv-533852 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 21:43:57.941967   30482 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0717 21:43:57.942099   30482 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [dockerenv-533852 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 21:43:57.942154   30482 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 21:43:57.942204   30482 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 21:43:57.942270   30482 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0717 21:43:57.942349   30482 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 21:43:57.942414   30482 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 21:43:57.942477   30482 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 21:43:57.942564   30482 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 21:43:57.942632   30482 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 21:43:57.942766   30482 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 21:43:57.942857   30482 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 21:43:57.942888   30482 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 21:43:57.942973   30482 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 21:43:57.944344   30482 out.go:204]   - Booting up control plane ...
	I0717 21:43:57.944436   30482 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 21:43:57.944495   30482 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 21:43:57.944546   30482 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 21:43:57.944608   30482 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 21:43:57.944736   30482 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 21:43:57.944802   30482 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.001812 seconds
	I0717 21:43:57.944898   30482 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 21:43:57.944998   30482 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 21:43:57.945048   30482 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 21:43:57.945200   30482 kubeadm.go:322] [mark-control-plane] Marking the node dockerenv-533852 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 21:43:57.945249   30482 kubeadm.go:322] [bootstrap-token] Using token: ir1t8l.7ihf14st871v1quk
	I0717 21:43:57.946781   30482 out.go:204]   - Configuring RBAC rules ...
	I0717 21:43:57.946893   30482 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 21:43:57.946967   30482 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 21:43:57.947077   30482 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 21:43:57.947183   30482 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 21:43:57.947278   30482 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 21:43:57.947355   30482 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 21:43:57.947450   30482 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 21:43:57.947487   30482 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 21:43:57.947523   30482 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 21:43:57.947526   30482 kubeadm.go:322] 
	I0717 21:43:57.947573   30482 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 21:43:57.947575   30482 kubeadm.go:322] 
	I0717 21:43:57.947636   30482 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 21:43:57.947638   30482 kubeadm.go:322] 
	I0717 21:43:57.947660   30482 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 21:43:57.947709   30482 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 21:43:57.947753   30482 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 21:43:57.947755   30482 kubeadm.go:322] 
	I0717 21:43:57.947832   30482 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0717 21:43:57.947835   30482 kubeadm.go:322] 
	I0717 21:43:57.947869   30482 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 21:43:57.947872   30482 kubeadm.go:322] 
	I0717 21:43:57.947914   30482 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 21:43:57.947973   30482 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 21:43:57.948027   30482 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 21:43:57.948033   30482 kubeadm.go:322] 
	I0717 21:43:57.948099   30482 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 21:43:57.948158   30482 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 21:43:57.948162   30482 kubeadm.go:322] 
	I0717 21:43:57.948245   30482 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ir1t8l.7ihf14st871v1quk \
	I0717 21:43:57.948343   30482 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:e689e19472dae50aa2cf886e41f52c0fd80f34a719435911b1bb4ef0d359ff11 \
	I0717 21:43:57.948362   30482 kubeadm.go:322] 	--control-plane 
	I0717 21:43:57.948364   30482 kubeadm.go:322] 
	I0717 21:43:57.948448   30482 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 21:43:57.948451   30482 kubeadm.go:322] 
	I0717 21:43:57.948516   30482 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ir1t8l.7ihf14st871v1quk \
	I0717 21:43:57.948606   30482 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:e689e19472dae50aa2cf886e41f52c0fd80f34a719435911b1bb4ef0d359ff11 
	I0717 21:43:57.948612   30482 cni.go:84] Creating CNI manager for ""
	I0717 21:43:57.948618   30482 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0717 21:43:57.950153   30482 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 21:43:57.951433   30482 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 21:43:57.955222   30482 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 21:43:57.955234   30482 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 21:43:57.972414   30482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 21:43:58.620024   30482 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 21:43:58.620107   30482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:43:58.620117   30482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.31.0 minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8 minikube.k8s.io/name=dockerenv-533852 minikube.k8s.io/updated_at=2023_07_17T21_43_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:43:58.626690   30482 ops.go:34] apiserver oom_adj: -16
	I0717 21:43:58.687293   30482 kubeadm.go:1081] duration metric: took 67.240252ms to wait for elevateKubeSystemPrivileges.
	I0717 21:43:58.695428   30482 kubeadm.go:406] StartCluster complete in 9.427013598s
	I0717 21:43:58.695450   30482 settings.go:142] acquiring lock: {Name:mk45b34d922783d9ed397984207aa31ca4281835 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:43:58.695516   30482 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16899-6342/kubeconfig
	I0717 21:43:58.696354   30482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-6342/kubeconfig: {Name:mk5af2760efebf8dd7d91150f7a763b04339dbdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:43:58.696575   30482 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 21:43:58.696751   30482 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 21:43:58.696829   30482 addons.go:69] Setting storage-provisioner=true in profile "dockerenv-533852"
	I0717 21:43:58.696843   30482 addons.go:231] Setting addon storage-provisioner=true in "dockerenv-533852"
	I0717 21:43:58.696862   30482 config.go:182] Loaded profile config "dockerenv-533852": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0717 21:43:58.696866   30482 addons.go:69] Setting default-storageclass=true in profile "dockerenv-533852"
	I0717 21:43:58.696890   30482 host.go:66] Checking if "dockerenv-533852" exists ...
	I0717 21:43:58.696892   30482 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "dockerenv-533852"
	I0717 21:43:58.697177   30482 cli_runner.go:164] Run: docker container inspect dockerenv-533852 --format={{.State.Status}}
	I0717 21:43:58.697332   30482 cli_runner.go:164] Run: docker container inspect dockerenv-533852 --format={{.State.Status}}
	I0717 21:43:58.719331   30482 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 21:43:58.720722   30482 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 21:43:58.720730   30482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 21:43:58.720764   30482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-533852
	I0717 21:43:58.733862   30482 addons.go:231] Setting addon default-storageclass=true in "dockerenv-533852"
	I0717 21:43:58.733895   30482 host.go:66] Checking if "dockerenv-533852" exists ...
	I0717 21:43:58.734369   30482 cli_runner.go:164] Run: docker container inspect dockerenv-533852 --format={{.State.Status}}
	I0717 21:43:58.737510   30482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32777 SSHKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/machines/dockerenv-533852/id_rsa Username:docker}
	I0717 21:43:58.752417   30482 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 21:43:58.752429   30482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 21:43:58.752480   30482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-533852
	I0717 21:43:58.768944   30482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32777 SSHKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/machines/dockerenv-533852/id_rsa Username:docker}
	I0717 21:43:58.790252   30482 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 21:43:58.845994   30482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 21:43:58.950136   30482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 21:43:59.242340   30482 kapi.go:248] "coredns" deployment in "kube-system" namespace and "dockerenv-533852" context rescaled to 1 replicas
	I0717 21:43:59.242367   30482 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0717 21:43:59.244158   30482 out.go:177] * Verifying Kubernetes components...
	I0717 21:43:59.245734   30482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 21:43:59.448257   30482 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0717 21:43:59.639019   30482 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0717 21:43:59.640533   30482 addons.go:502] enable addons completed in 943.782056ms: enabled=[storage-provisioner default-storageclass]
	I0717 21:43:59.639666   30482 api_server.go:52] waiting for apiserver process to appear ...
	I0717 21:43:59.640593   30482 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 21:43:59.650576   30482 api_server.go:72] duration metric: took 408.182028ms to wait for apiserver process to appear ...
	I0717 21:43:59.650585   30482 api_server.go:88] waiting for apiserver healthz status ...
	I0717 21:43:59.650597   30482 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0717 21:43:59.655253   30482 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0717 21:43:59.656238   30482 api_server.go:141] control plane version: v1.27.3
	I0717 21:43:59.656249   30482 api_server.go:131] duration metric: took 5.659561ms to wait for apiserver health ...
	I0717 21:43:59.656254   30482 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 21:43:59.661173   30482 system_pods.go:59] 5 kube-system pods found
	I0717 21:43:59.661186   30482 system_pods.go:61] "etcd-dockerenv-533852" [c2cbdb67-2a6a-40e1-8c84-6dfc637ff2fe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 21:43:59.661193   30482 system_pods.go:61] "kube-apiserver-dockerenv-533852" [af84a076-e7f2-434b-b400-be68dc368ae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 21:43:59.661199   30482 system_pods.go:61] "kube-controller-manager-dockerenv-533852" [4785c641-c28b-49c7-86bc-d8cb241c54c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 21:43:59.661204   30482 system_pods.go:61] "kube-scheduler-dockerenv-533852" [0e09049e-6de3-4d79-9e40-6de75745aa00] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 21:43:59.661210   30482 system_pods.go:61] "storage-provisioner" [6c107e84-d432-4d63-831c-263a4df0c806] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0717 21:43:59.661214   30482 system_pods.go:74] duration metric: took 4.956093ms to wait for pod list to return data ...
	I0717 21:43:59.661220   30482 kubeadm.go:581] duration metric: took 418.82871ms to wait for : map[apiserver:true system_pods:true] ...
	I0717 21:43:59.661229   30482 node_conditions.go:102] verifying NodePressure condition ...
	I0717 21:43:59.663444   30482 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0717 21:43:59.663453   30482 node_conditions.go:123] node cpu capacity is 8
	I0717 21:43:59.663462   30482 node_conditions.go:105] duration metric: took 2.230301ms to run NodePressure ...
	I0717 21:43:59.663471   30482 start.go:228] waiting for startup goroutines ...
	I0717 21:43:59.663477   30482 start.go:233] waiting for cluster config update ...
	I0717 21:43:59.663485   30482 start.go:242] writing updated cluster config ...
	I0717 21:43:59.663693   30482 ssh_runner.go:195] Run: rm -f paused
	I0717 21:43:59.707384   30482 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 21:43:59.709541   30482 out.go:177] * Done! kubectl is now configured to use "dockerenv-533852" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	a791701a366bb       6e38f40d628db       Less than a second ago   Created             storage-provisioner       0                   a742d370df8da       storage-provisioner
	7d976a3dc7f05       08a0c939e61b7       19 seconds ago           Running             kube-apiserver            0                   a3c12462c2879       kube-apiserver-dockerenv-533852
	06d5c09cadaa0       7cffc01dba0e1       19 seconds ago           Running             kube-controller-manager   0                   86a3c2018cdd5       kube-controller-manager-dockerenv-533852
	b94352f537eca       41697ceeb70b3       19 seconds ago           Running             kube-scheduler            0                   6f0a7b268d20f       kube-scheduler-dockerenv-533852
	f4d0c1a2f5a4d       86b6af7dd652c       19 seconds ago           Running             etcd                      0                   0439d5b57eebe       etcd-dockerenv-533852
	
	* 
	* ==> containerd <==
	* Jul 17 21:43:52 dockerenv-533852 containerd[775]: time="2023-07-17T21:43:52.540253302Z" level=info msg="CreateContainer within sandbox \"a3c12462c2879c316b2bb5569fa9b816334d2bc70f49524b9c4b8a6a3bd752a4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7d976a3dc7f0584b016fceeb761615e5c660903fc5d44a80d1ade3e08a2870c4\""
	Jul 17 21:43:52 dockerenv-533852 containerd[775]: time="2023-07-17T21:43:52.540627347Z" level=info msg="StartContainer for \"7d976a3dc7f0584b016fceeb761615e5c660903fc5d44a80d1ade3e08a2870c4\""
	Jul 17 21:43:52 dockerenv-533852 containerd[775]: time="2023-07-17T21:43:52.657208340Z" level=info msg="StartContainer for \"b94352f537eca562a08d07e2f1266276093bfdd64d4558659dbb0a7c9443541b\" returns successfully"
	Jul 17 21:43:52 dockerenv-533852 containerd[775]: time="2023-07-17T21:43:52.657692144Z" level=info msg="StartContainer for \"f4d0c1a2f5a4d828be8fe960c04bb8f831a48d1bab84d0614f2fcdce07031a54\" returns successfully"
	Jul 17 21:43:52 dockerenv-533852 containerd[775]: time="2023-07-17T21:43:52.663172453Z" level=info msg="StartContainer for \"06d5c09cadaa02c5fa688c84d472df7c5baa801929f8f619ee18a58ce966bc26\" returns successfully"
	Jul 17 21:43:52 dockerenv-533852 containerd[775]: time="2023-07-17T21:43:52.666667837Z" level=info msg="StartContainer for \"7d976a3dc7f0584b016fceeb761615e5c660903fc5d44a80d1ade3e08a2870c4\" returns successfully"
	Jul 17 21:44:11 dockerenv-533852 containerd[775]: time="2023-07-17T21:44:11.474875438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:6c107e84-d432-4d63-831c-263a4df0c806,Namespace:kube-system,Attempt:0,}"
	Jul 17 21:44:11 dockerenv-533852 containerd[775]: time="2023-07-17T21:44:11.492233305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 21:44:11 dockerenv-533852 containerd[775]: time="2023-07-17T21:44:11.492303836Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 21:44:11 dockerenv-533852 containerd[775]: time="2023-07-17T21:44:11.492326343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 21:44:11 dockerenv-533852 containerd[775]: time="2023-07-17T21:44:11.492531149Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a742d370df8dabb0594532a58acb85ab24c41e0c95e06539480e7335af3e2d98 pid=1815 runtime=io.containerd.runc.v2
	Jul 17 21:44:11 dockerenv-533852 containerd[775]: time="2023-07-17T21:44:11.575712531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mvxm4,Uid:bdc0d18c-1547-4347-9a17-e3f530362bca,Namespace:kube-system,Attempt:0,}"
	Jul 17 21:44:11 dockerenv-533852 containerd[775]: time="2023-07-17T21:44:11.576372635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-vfndw,Uid:8b1d8f93-76fe-46cb-ab41-0acd227a7719,Namespace:kube-system,Attempt:0,}"
	Jul 17 21:44:11 dockerenv-533852 containerd[775]: time="2023-07-17T21:44:11.642988767Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 21:44:11 dockerenv-533852 containerd[775]: time="2023-07-17T21:44:11.643055040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 21:44:11 dockerenv-533852 containerd[775]: time="2023-07-17T21:44:11.643064556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 21:44:11 dockerenv-533852 containerd[775]: time="2023-07-17T21:44:11.643275261Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/27fa62a6f3dbc54543f4b054c886c91eebe58ba76b0570a55e7ae626fe207bbc pid=1883 runtime=io.containerd.runc.v2
	Jul 17 21:44:11 dockerenv-533852 containerd[775]: time="2023-07-17T21:44:11.649273239Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 21:44:11 dockerenv-533852 containerd[775]: time="2023-07-17T21:44:11.649342727Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 21:44:11 dockerenv-533852 containerd[775]: time="2023-07-17T21:44:11.649352362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 21:44:11 dockerenv-533852 containerd[775]: time="2023-07-17T21:44:11.649528354Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b54acc6535e95aaebe2d6d428d6312445017190ebf2e8d7a140fa6f2eaaf7f37 pid=1904 runtime=io.containerd.runc.v2
	Jul 17 21:44:11 dockerenv-533852 containerd[775]: time="2023-07-17T21:44:11.652336923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:6c107e84-d432-4d63-831c-263a4df0c806,Namespace:kube-system,Attempt:0,} returns sandbox id \"a742d370df8dabb0594532a58acb85ab24c41e0c95e06539480e7335af3e2d98\""
	Jul 17 21:44:11 dockerenv-533852 containerd[775]: time="2023-07-17T21:44:11.657052197Z" level=info msg="CreateContainer within sandbox \"a742d370df8dabb0594532a58acb85ab24c41e0c95e06539480e7335af3e2d98\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Jul 17 21:44:11 dockerenv-533852 containerd[775]: time="2023-07-17T21:44:11.667986717Z" level=info msg="CreateContainer within sandbox \"a742d370df8dabb0594532a58acb85ab24c41e0c95e06539480e7335af3e2d98\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"a791701a366bb803b8e31c24271ed67b2e26fe476cc7d0ba5430eced0dd9d131\""
	Jul 17 21:44:11 dockerenv-533852 containerd[775]: time="2023-07-17T21:44:11.668492618Z" level=info msg="StartContainer for \"a791701a366bb803b8e31c24271ed67b2e26fe476cc7d0ba5430eced0dd9d131\""
	
	* 
	* ==> describe nodes <==
	* Name:               dockerenv-533852
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=dockerenv-533852
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8
	                    minikube.k8s.io/name=dockerenv-533852
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T21_43_58_0700
	                    minikube.k8s.io/version=v1.31.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 21:43:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  dockerenv-533852
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 21:44:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 21:44:08 +0000   Mon, 17 Jul 2023 21:43:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 21:44:08 +0000   Mon, 17 Jul 2023 21:43:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 21:44:08 +0000   Mon, 17 Jul 2023 21:43:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 21:44:08 +0000   Mon, 17 Jul 2023 21:44:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    dockerenv-533852
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 7fa012d647884408b8f950eeb16f989f
	  System UUID:                09a84347-ff63-4b2c-9e3f-e25cadb9b302
	  Boot ID:                    946bcabb-ba91-4ad0-b465-f616832ff8d0
	  Kernel Version:             5.15.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.21
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-rhnbj                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     0s
	  kube-system                 etcd-dockerenv-533852                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13s
	  kube-system                 kindnet-vfndw                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      0s
	  kube-system                 kube-apiserver-dockerenv-533852             250m (3%)     0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 kube-controller-manager-dockerenv-533852    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 kube-proxy-mvxm4                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         0s
	  kube-system                 kube-scheduler-dockerenv-533852             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  NodeAllocatableEnforced  20s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 20s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19s (x8 over 20s)  kubelet          Node dockerenv-533852 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19s (x8 over 20s)  kubelet          Node dockerenv-533852 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19s (x7 over 20s)  kubelet          Node dockerenv-533852 status is now: NodeHasSufficientPID
	  Normal  Starting                 14s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14s                kubelet          Node dockerenv-533852 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14s                kubelet          Node dockerenv-533852 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14s                kubelet          Node dockerenv-533852 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             14s                kubelet          Node dockerenv-533852 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  14s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                 kubelet          Node dockerenv-533852 status is now: NodeReady
	  Normal  RegisteredNode           1s                 node-controller  Node dockerenv-533852 event: Registered Node dockerenv-533852 in Controller
	
	* 
	* ==> dmesg <==
	* [Jul17 21:17]  #2
	[  +0.001496]  #3
	[  +0.000024]  #4
	[  +0.003116] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.003162] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.002207] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.001905]  #5
	[  +0.000676]  #6
	[  +0.003303]  #7
	[  +0.061143] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.425844] i8042: Warning: Keylock active
	[  +0.007147] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003041] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000726] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000709] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000662] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000658] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000769] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.001263] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +8.978016] kauditd_printk_skb: 36 callbacks suppressed
	
	* 
	* ==> etcd [f4d0c1a2f5a4d828be8fe960c04bb8f831a48d1bab84d0614f2fcdce07031a54] <==
	* {"level":"info","ts":"2023-07-17T21:43:52.738Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-07-17T21:43:52.738Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-07-17T21:43:52.739Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-17T21:43:52.739Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-07-17T21:43:52.739Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-07-17T21:43:52.739Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-17T21:43:52.740Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-17T21:43:53.073Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-07-17T21:43:53.073Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-07-17T21:43:53.073Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-07-17T21:43:53.073Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-07-17T21:43:53.073Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-07-17T21:43:53.073Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-07-17T21:43:53.073Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-07-17T21:43:53.074Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T21:43:53.074Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T21:43:53.074Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:dockerenv-533852 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-17T21:43:53.074Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T21:43:53.074Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-17T21:43:53.075Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-17T21:43:53.075Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T21:43:53.075Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T21:43:53.075Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T21:43:53.076Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-17T21:43:53.076Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	
	* 
	* ==> kernel <==
	*  21:44:12 up 26 min,  0 users,  load average: 0.41, 0.53, 0.27
	Linux dockerenv-533852 5.15.0-1037-gcp #45~20.04.1-Ubuntu SMP Thu Jun 22 08:31:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kube-apiserver [7d976a3dc7f0584b016fceeb761615e5c660903fc5d44a80d1ade3e08a2870c4] <==
	* I0717 21:43:54.743624       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 21:43:54.743633       1 cache.go:39] Caches are synced for autoregister controller
	I0717 21:43:54.744077       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0717 21:43:54.744091       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0717 21:43:54.744556       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0717 21:43:54.744713       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 21:43:54.745018       1 controller.go:624] quota admission added evaluator for: namespaces
	I0717 21:43:54.831353       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0717 21:43:54.836890       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 21:43:55.432749       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0717 21:43:55.647209       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0717 21:43:55.650384       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0717 21:43:55.650406       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 21:43:55.999036       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 21:43:56.028895       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 21:43:56.153069       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0717 21:43:56.159267       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0717 21:43:56.160087       1 controller.go:624] quota admission added evaluator for: endpoints
	I0717 21:43:56.163560       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 21:43:56.745560       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0717 21:43:57.718698       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0717 21:43:57.728049       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0717 21:43:57.735504       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0717 21:44:11.232288       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0717 21:44:11.338226       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [06d5c09cadaa02c5fa688c84d472df7c5baa801929f8f619ee18a58ce966bc26] <==
	* I0717 21:44:10.529713       1 node_lifecycle_controller.go:1223] "Initializing eviction metric for zone" zone=""
	I0717 21:44:10.529864       1 shared_informer.go:318] Caches are synced for stateful set
	I0717 21:44:10.529880       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="dockerenv-533852"
	I0717 21:44:10.529930       1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
	I0717 21:44:10.529882       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0717 21:44:10.529999       1 taint_manager.go:211] "Sending events to api server"
	I0717 21:44:10.530000       1 event.go:307] "Event occurred" object="dockerenv-533852" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node dockerenv-533852 event: Registered Node dockerenv-533852 in Controller"
	I0717 21:44:10.530163       1 shared_informer.go:318] Caches are synced for TTL
	I0717 21:44:10.530252       1 shared_informer.go:318] Caches are synced for daemon sets
	I0717 21:44:10.535512       1 shared_informer.go:318] Caches are synced for service account
	I0717 21:44:10.569389       1 shared_informer.go:318] Caches are synced for disruption
	I0717 21:44:10.580838       1 shared_informer.go:318] Caches are synced for persistent volume
	I0717 21:44:10.582046       1 shared_informer.go:318] Caches are synced for deployment
	I0717 21:44:10.691528       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0717 21:44:10.696827       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0717 21:44:10.730580       1 shared_informer.go:318] Caches are synced for endpoint
	I0717 21:44:10.733048       1 shared_informer.go:318] Caches are synced for resource quota
	I0717 21:44:10.734737       1 shared_informer.go:318] Caches are synced for resource quota
	I0717 21:44:11.055963       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 21:44:11.078359       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 21:44:11.078393       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0717 21:44:11.239713       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mvxm4"
	I0717 21:44:11.241260       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-vfndw"
	I0717 21:44:11.344119       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5d78c9869d to 1"
	I0717 21:44:11.538813       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-rhnbj"
	
	* 
	* ==> kube-scheduler [b94352f537eca562a08d07e2f1266276093bfdd64d4558659dbb0a7c9443541b] <==
	* E0717 21:43:54.834880       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 21:43:54.834852       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 21:43:54.834603       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 21:43:54.835046       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 21:43:54.834446       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 21:43:54.835192       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 21:43:54.835298       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 21:43:54.835362       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 21:43:55.735975       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 21:43:55.736024       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 21:43:55.755601       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 21:43:55.755634       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 21:43:55.764699       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 21:43:55.764732       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 21:43:55.825855       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 21:43:55.825892       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 21:43:55.841249       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 21:43:55.841283       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 21:43:55.847214       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 21:43:55.847237       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 21:43:55.848104       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 21:43:55.848134       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 21:43:55.857734       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 21:43:55.857762       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0717 21:43:56.353735       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Jul 17 21:44:10 dockerenv-533852 kubelet[1499]: I0717 21:44:10.542179    1499 topology_manager.go:212] "Topology Admit Handler"
	Jul 17 21:44:10 dockerenv-533852 kubelet[1499]: I0717 21:44:10.689139    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6c107e84-d432-4d63-831c-263a4df0c806-tmp\") pod \"storage-provisioner\" (UID: \"6c107e84-d432-4d63-831c-263a4df0c806\") " pod="kube-system/storage-provisioner"
	Jul 17 21:44:10 dockerenv-533852 kubelet[1499]: I0717 21:44:10.689207    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m5d8\" (UniqueName: \"kubernetes.io/projected/6c107e84-d432-4d63-831c-263a4df0c806-kube-api-access-4m5d8\") pod \"storage-provisioner\" (UID: \"6c107e84-d432-4d63-831c-263a4df0c806\") " pod="kube-system/storage-provisioner"
	Jul 17 21:44:10 dockerenv-533852 kubelet[1499]: E0717 21:44:10.795000    1499 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 17 21:44:10 dockerenv-533852 kubelet[1499]: E0717 21:44:10.795030    1499 projected.go:198] Error preparing data for projected volume kube-api-access-4m5d8 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jul 17 21:44:10 dockerenv-533852 kubelet[1499]: E0717 21:44:10.795099    1499 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6c107e84-d432-4d63-831c-263a4df0c806-kube-api-access-4m5d8 podName:6c107e84-d432-4d63-831c-263a4df0c806 nodeName:}" failed. No retries permitted until 2023-07-17 21:44:11.295076322 +0000 UTC m=+13.600757822 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4m5d8" (UniqueName: "kubernetes.io/projected/6c107e84-d432-4d63-831c-263a4df0c806-kube-api-access-4m5d8") pod "storage-provisioner" (UID: "6c107e84-d432-4d63-831c-263a4df0c806") : configmap "kube-root-ca.crt" not found
	Jul 17 21:44:11 dockerenv-533852 kubelet[1499]: I0717 21:44:11.244845    1499 topology_manager.go:212] "Topology Admit Handler"
	Jul 17 21:44:11 dockerenv-533852 kubelet[1499]: I0717 21:44:11.248167    1499 topology_manager.go:212] "Topology Admit Handler"
	Jul 17 21:44:11 dockerenv-533852 kubelet[1499]: I0717 21:44:11.432018    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bdc0d18c-1547-4347-9a17-e3f530362bca-xtables-lock\") pod \"kube-proxy-mvxm4\" (UID: \"bdc0d18c-1547-4347-9a17-e3f530362bca\") " pod="kube-system/kube-proxy-mvxm4"
	Jul 17 21:44:11 dockerenv-533852 kubelet[1499]: I0717 21:44:11.432117    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bdc0d18c-1547-4347-9a17-e3f530362bca-lib-modules\") pod \"kube-proxy-mvxm4\" (UID: \"bdc0d18c-1547-4347-9a17-e3f530362bca\") " pod="kube-system/kube-proxy-mvxm4"
	Jul 17 21:44:11 dockerenv-533852 kubelet[1499]: I0717 21:44:11.432149    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6886\" (UniqueName: \"kubernetes.io/projected/bdc0d18c-1547-4347-9a17-e3f530362bca-kube-api-access-k6886\") pod \"kube-proxy-mvxm4\" (UID: \"bdc0d18c-1547-4347-9a17-e3f530362bca\") " pod="kube-system/kube-proxy-mvxm4"
	Jul 17 21:44:11 dockerenv-533852 kubelet[1499]: I0717 21:44:11.432186    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bdc0d18c-1547-4347-9a17-e3f530362bca-kube-proxy\") pod \"kube-proxy-mvxm4\" (UID: \"bdc0d18c-1547-4347-9a17-e3f530362bca\") " pod="kube-system/kube-proxy-mvxm4"
	Jul 17 21:44:11 dockerenv-533852 kubelet[1499]: I0717 21:44:11.432262    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7sn7\" (UniqueName: \"kubernetes.io/projected/8b1d8f93-76fe-46cb-ab41-0acd227a7719-kube-api-access-q7sn7\") pod \"kindnet-vfndw\" (UID: \"8b1d8f93-76fe-46cb-ab41-0acd227a7719\") " pod="kube-system/kindnet-vfndw"
	Jul 17 21:44:11 dockerenv-533852 kubelet[1499]: I0717 21:44:11.432378    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8b1d8f93-76fe-46cb-ab41-0acd227a7719-cni-cfg\") pod \"kindnet-vfndw\" (UID: \"8b1d8f93-76fe-46cb-ab41-0acd227a7719\") " pod="kube-system/kindnet-vfndw"
	Jul 17 21:44:11 dockerenv-533852 kubelet[1499]: I0717 21:44:11.432430    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b1d8f93-76fe-46cb-ab41-0acd227a7719-lib-modules\") pod \"kindnet-vfndw\" (UID: \"8b1d8f93-76fe-46cb-ab41-0acd227a7719\") " pod="kube-system/kindnet-vfndw"
	Jul 17 21:44:11 dockerenv-533852 kubelet[1499]: I0717 21:44:11.432500    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b1d8f93-76fe-46cb-ab41-0acd227a7719-xtables-lock\") pod \"kindnet-vfndw\" (UID: \"8b1d8f93-76fe-46cb-ab41-0acd227a7719\") " pod="kube-system/kindnet-vfndw"
	Jul 17 21:44:11 dockerenv-533852 kubelet[1499]: I0717 21:44:11.544207    1499 topology_manager.go:212] "Topology Admit Handler"
	Jul 17 21:44:11 dockerenv-533852 kubelet[1499]: I0717 21:44:11.735116    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c23e201f-3c18-4db5-aa0b-03934d0aed31-config-volume\") pod \"coredns-5d78c9869d-rhnbj\" (UID: \"c23e201f-3c18-4db5-aa0b-03934d0aed31\") " pod="kube-system/coredns-5d78c9869d-rhnbj"
	Jul 17 21:44:11 dockerenv-533852 kubelet[1499]: I0717 21:44:11.735195    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45vwd\" (UniqueName: \"kubernetes.io/projected/c23e201f-3c18-4db5-aa0b-03934d0aed31-kube-api-access-45vwd\") pod \"coredns-5d78c9869d-rhnbj\" (UID: \"c23e201f-3c18-4db5-aa0b-03934d0aed31\") " pod="kube-system/coredns-5d78c9869d-rhnbj"
	Jul 17 21:44:11 dockerenv-533852 kubelet[1499]: E0717 21:44:11.932900    1499 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dff7ce0d87366698f7f29acdd85d8b7ff18004941b28e292425f7172ec2007b\": failed to find network info for sandbox \"0dff7ce0d87366698f7f29acdd85d8b7ff18004941b28e292425f7172ec2007b\""
	Jul 17 21:44:11 dockerenv-533852 kubelet[1499]: E0717 21:44:11.932976    1499 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dff7ce0d87366698f7f29acdd85d8b7ff18004941b28e292425f7172ec2007b\": failed to find network info for sandbox \"0dff7ce0d87366698f7f29acdd85d8b7ff18004941b28e292425f7172ec2007b\"" pod="kube-system/coredns-5d78c9869d-rhnbj"
	Jul 17 21:44:11 dockerenv-533852 kubelet[1499]: E0717 21:44:11.933002    1499 kuberuntime_manager.go:1122] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dff7ce0d87366698f7f29acdd85d8b7ff18004941b28e292425f7172ec2007b\": failed to find network info for sandbox \"0dff7ce0d87366698f7f29acdd85d8b7ff18004941b28e292425f7172ec2007b\"" pod="kube-system/coredns-5d78c9869d-rhnbj"
	Jul 17 21:44:11 dockerenv-533852 kubelet[1499]: E0717 21:44:11.933064    1499 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5d78c9869d-rhnbj_kube-system(c23e201f-3c18-4db5-aa0b-03934d0aed31)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5d78c9869d-rhnbj_kube-system(c23e201f-3c18-4db5-aa0b-03934d0aed31)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0dff7ce0d87366698f7f29acdd85d8b7ff18004941b28e292425f7172ec2007b\\\": failed to find network info for sandbox \\\"0dff7ce0d87366698f7f29acdd85d8b7ff18004941b28e292425f7172ec2007b\\\"\"" pod="kube-system/coredns-5d78c9869d-rhnbj" podUID=c23e201f-3c18-4db5-aa0b-03934d0aed31
	Jul 17 21:44:11 dockerenv-533852 kubelet[1499]: I0717 21:44:11.945125    1499 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.945081735 podCreationTimestamp="2023-07-17 21:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-17 21:44:11.945045624 +0000 UTC m=+14.250727127" watchObservedRunningTime="2023-07-17 21:44:11.945081735 +0000 UTC m=+14.250763239"
	Jul 17 21:44:11 dockerenv-533852 kubelet[1499]: I0717 21:44:11.945224    1499 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-mvxm4" podStartSLOduration=0.945192472 podCreationTimestamp="2023-07-17 21:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-17 21:44:11.93289536 +0000 UTC m=+14.238576863" watchObservedRunningTime="2023-07-17 21:44:11.945192472 +0000 UTC m=+14.250873979"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p dockerenv-533852 -n dockerenv-533852
helpers_test.go:261: (dbg) Run:  kubectl --context dockerenv-533852 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: coredns-5d78c9869d-rhnbj kindnet-vfndw
helpers_test.go:274: ======> post-mortem[TestDockerEnvContainerd]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context dockerenv-533852 describe pod coredns-5d78c9869d-rhnbj kindnet-vfndw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context dockerenv-533852 describe pod coredns-5d78c9869d-rhnbj kindnet-vfndw: exit status 1 (55.224845ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-5d78c9869d-rhnbj" not found
	Error from server (NotFound): pods "kindnet-vfndw" not found

** /stderr **
helpers_test.go:279: kubectl --context dockerenv-533852 describe pod coredns-5d78c9869d-rhnbj kindnet-vfndw: exit status 1
helpers_test.go:175: Cleaning up "dockerenv-533852" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-533852
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-533852: (1.825558835s)
--- FAIL: TestDockerEnvContainerd (34.87s)

TestFunctional/parallel/License (0.25s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2284: (dbg) Non-zero exit: out/minikube-linux-amd64 license: exit status 40 (246.157251ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to INET_LICENSES: Failed to download licenses: download request did not return a 200, received: 404
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_license_42713f820c0ac68901ecf7b12bfdf24c2cafe65d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2285: command "\n\n" failed: exit status 40
--- FAIL: TestFunctional/parallel/License (0.25s)

TestMissingContainerUpgrade (156.41s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:321: (dbg) Run:  /tmp/minikube-v1.22.0.3244981091.exe start -p missing-upgrade-863015 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:321: (dbg) Done: /tmp/minikube-v1.22.0.3244981091.exe start -p missing-upgrade-863015 --memory=2200 --driver=docker  --container-runtime=containerd: (1m20.395808678s)
version_upgrade_test.go:330: (dbg) Run:  docker stop missing-upgrade-863015
version_upgrade_test.go:330: (dbg) Done: docker stop missing-upgrade-863015: (10.360107493s)
version_upgrade_test.go:335: (dbg) Run:  docker rm missing-upgrade-863015
version_upgrade_test.go:341: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-863015 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:341: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p missing-upgrade-863015 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 90 (58.575081643s)

-- stdout --
	* [missing-upgrade-863015] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-6342/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-6342/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-863015 in cluster missing-upgrade-863015
	* Pulling base image ...
	* docker "missing-upgrade-863015" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0717 22:09:12.290118  190811 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:09:12.290243  190811 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:09:12.290252  190811 out.go:309] Setting ErrFile to fd 2...
	I0717 22:09:12.290259  190811 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:09:12.290482  190811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-6342/.minikube/bin
	I0717 22:09:12.290984  190811 out.go:303] Setting JSON to false
	I0717 22:09:12.292376  190811 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3098,"bootTime":1689628654,"procs":737,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 22:09:12.292437  190811 start.go:138] virtualization: kvm guest
	I0717 22:09:12.294523  190811 out.go:177] * [missing-upgrade-863015] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 22:09:12.296220  190811 notify.go:220] Checking for updates...
	I0717 22:09:12.296225  190811 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 22:09:12.297592  190811 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 22:09:12.298971  190811 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-6342/kubeconfig
	I0717 22:09:12.300370  190811 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-6342/.minikube
	I0717 22:09:12.301632  190811 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 22:09:12.302928  190811 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 22:09:12.304583  190811 config.go:182] Loaded profile config "missing-upgrade-863015": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0717 22:09:12.306427  190811 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0717 22:09:12.307716  190811 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 22:09:12.329494  190811 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 22:09:12.329617  190811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 22:09:12.394098  190811 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:77 SystemTime:2023-07-17 22:09:12.381472749 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 22:09:12.394230  190811 docker.go:294] overlay module found
	I0717 22:09:12.396104  190811 out.go:177] * Using the docker driver based on existing profile
	I0717 22:09:12.397385  190811 start.go:298] selected driver: docker
	I0717 22:09:12.397402  190811 start.go:880] validating driver "docker" against &{Name:missing-upgrade-863015 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:missing-upgrade-863015 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.21.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:09:12.397524  190811 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 22:09:12.398591  190811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 22:09:12.455309  190811 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:true NGoroutines:77 SystemTime:2023-07-17 22:09:12.445951122 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 22:09:12.455616  190811 cni.go:84] Creating CNI manager for ""
	I0717 22:09:12.455634  190811 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0717 22:09:12.455653  190811 start_flags.go:319] config:
	{Name:missing-upgrade-863015 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:missing-upgrade-863015 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.21.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:09:12.457638  190811 out.go:177] * Starting control plane node missing-upgrade-863015 in cluster missing-upgrade-863015
	I0717 22:09:12.459005  190811 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0717 22:09:12.460274  190811 out.go:177] * Pulling base image ...
	I0717 22:09:12.461479  190811 preload.go:132] Checking if preload exists for k8s version v1.21.2 and runtime containerd
	I0717 22:09:12.461511  190811 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16899-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-amd64.tar.lz4
	I0717 22:09:12.461522  190811 cache.go:57] Caching tarball of preloaded images
	I0717 22:09:12.461597  190811 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0717 22:09:12.461623  190811 preload.go:174] Found /home/jenkins/minikube-integration/16899-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 22:09:12.461636  190811 cache.go:60] Finished verifying existence of preloaded tar for  v1.21.2 on containerd
	I0717 22:09:12.461781  190811 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/missing-upgrade-863015/config.json ...
	I0717 22:09:12.478791  190811 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0717 22:09:12.478819  190811 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0717 22:09:12.478841  190811 cache.go:195] Successfully downloaded all kic artifacts
	I0717 22:09:12.478900  190811 start.go:365] acquiring machines lock for missing-upgrade-863015: {Name:mk6ce1462422279066202d728c946c73962a9a27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:09:12.478968  190811 start.go:369] acquired machines lock for "missing-upgrade-863015" in 36.159µs
	I0717 22:09:12.478989  190811 start.go:96] Skipping create...Using existing machine configuration
	I0717 22:09:12.478995  190811 fix.go:54] fixHost starting: 
	I0717 22:09:12.479194  190811 cli_runner.go:164] Run: docker container inspect missing-upgrade-863015 --format={{.State.Status}}
	W0717 22:09:12.494457  190811 cli_runner.go:211] docker container inspect missing-upgrade-863015 --format={{.State.Status}} returned with exit code 1
	I0717 22:09:12.494496  190811 fix.go:102] recreateIfNeeded on missing-upgrade-863015: state= err=unknown state "missing-upgrade-863015": docker container inspect missing-upgrade-863015 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-863015
	I0717 22:09:12.494510  190811 fix.go:107] machineExists: false. err=machine does not exist
	I0717 22:09:12.496427  190811 out.go:177] * docker "missing-upgrade-863015" container is missing, will recreate.
	I0717 22:09:12.497915  190811 delete.go:124] DEMOLISHING missing-upgrade-863015 ...
	I0717 22:09:12.497983  190811 cli_runner.go:164] Run: docker container inspect missing-upgrade-863015 --format={{.State.Status}}
	W0717 22:09:12.517240  190811 cli_runner.go:211] docker container inspect missing-upgrade-863015 --format={{.State.Status}} returned with exit code 1
	W0717 22:09:12.517307  190811 stop.go:75] unable to get state: unknown state "missing-upgrade-863015": docker container inspect missing-upgrade-863015 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-863015
	I0717 22:09:12.517337  190811 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-863015": docker container inspect missing-upgrade-863015 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-863015
	I0717 22:09:12.517686  190811 cli_runner.go:164] Run: docker container inspect missing-upgrade-863015 --format={{.State.Status}}
	W0717 22:09:12.532344  190811 cli_runner.go:211] docker container inspect missing-upgrade-863015 --format={{.State.Status}} returned with exit code 1
	I0717 22:09:12.532399  190811 delete.go:82] Unable to get host status for missing-upgrade-863015, assuming it has already been deleted: state: unknown state "missing-upgrade-863015": docker container inspect missing-upgrade-863015 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-863015
	I0717 22:09:12.532465  190811 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-863015
	W0717 22:09:12.546824  190811 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-863015 returned with exit code 1
	I0717 22:09:12.546855  190811 kic.go:367] could not find the container missing-upgrade-863015 to remove it. will try anyways
	I0717 22:09:12.546892  190811 cli_runner.go:164] Run: docker container inspect missing-upgrade-863015 --format={{.State.Status}}
	W0717 22:09:12.562471  190811 cli_runner.go:211] docker container inspect missing-upgrade-863015 --format={{.State.Status}} returned with exit code 1
	W0717 22:09:12.562532  190811 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-863015": docker container inspect missing-upgrade-863015 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-863015
	I0717 22:09:12.562588  190811 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-863015 /bin/bash -c "sudo init 0"
	W0717 22:09:12.581322  190811 cli_runner.go:211] docker exec --privileged -t missing-upgrade-863015 /bin/bash -c "sudo init 0" returned with exit code 1
	I0717 22:09:12.581357  190811 oci.go:647] error shutdown missing-upgrade-863015: docker exec --privileged -t missing-upgrade-863015 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-863015
	I0717 22:09:13.581497  190811 cli_runner.go:164] Run: docker container inspect missing-upgrade-863015 --format={{.State.Status}}
	W0717 22:09:13.597301  190811 cli_runner.go:211] docker container inspect missing-upgrade-863015 --format={{.State.Status}} returned with exit code 1
	I0717 22:09:13.597362  190811 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-863015": docker container inspect missing-upgrade-863015 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-863015
	I0717 22:09:13.597372  190811 oci.go:661] temporary error: container missing-upgrade-863015 status is  but expect it to be exited
	I0717 22:09:13.597404  190811 retry.go:31] will retry after 337.887291ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-863015": docker container inspect missing-upgrade-863015 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-863015
	I0717 22:09:13.935723  190811 cli_runner.go:164] Run: docker container inspect missing-upgrade-863015 --format={{.State.Status}}
	W0717 22:09:13.951530  190811 cli_runner.go:211] docker container inspect missing-upgrade-863015 --format={{.State.Status}} returned with exit code 1
	I0717 22:09:13.951597  190811 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-863015": docker container inspect missing-upgrade-863015 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-863015
	I0717 22:09:13.951614  190811 oci.go:661] temporary error: container missing-upgrade-863015 status is  but expect it to be exited
	I0717 22:09:13.951637  190811 retry.go:31] will retry after 718.175962ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-863015": docker container inspect missing-upgrade-863015 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-863015
	I0717 22:09:14.670513  190811 cli_runner.go:164] Run: docker container inspect missing-upgrade-863015 --format={{.State.Status}}
	W0717 22:09:14.686771  190811 cli_runner.go:211] docker container inspect missing-upgrade-863015 --format={{.State.Status}} returned with exit code 1
	I0717 22:09:14.686846  190811 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-863015": docker container inspect missing-upgrade-863015 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-863015
	I0717 22:09:14.686872  190811 oci.go:661] temporary error: container missing-upgrade-863015 status is  but expect it to be exited
	I0717 22:09:14.686905  190811 retry.go:31] will retry after 758.873535ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-863015": docker container inspect missing-upgrade-863015 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-863015
	I0717 22:09:15.446805  190811 cli_runner.go:164] Run: docker container inspect missing-upgrade-863015 --format={{.State.Status}}
	W0717 22:09:15.463634  190811 cli_runner.go:211] docker container inspect missing-upgrade-863015 --format={{.State.Status}} returned with exit code 1
	I0717 22:09:15.463688  190811 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-863015": docker container inspect missing-upgrade-863015 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-863015
	I0717 22:09:15.463697  190811 oci.go:661] temporary error: container missing-upgrade-863015 status is  but expect it to be exited
	I0717 22:09:15.463721  190811 retry.go:31] will retry after 1.908439076s: couldn't verify container is exited. %v: unknown state "missing-upgrade-863015": docker container inspect missing-upgrade-863015 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-863015
	I0717 22:09:17.372799  190811 cli_runner.go:164] Run: docker container inspect missing-upgrade-863015 --format={{.State.Status}}
	W0717 22:09:17.390374  190811 cli_runner.go:211] docker container inspect missing-upgrade-863015 --format={{.State.Status}} returned with exit code 1
	I0717 22:09:17.390442  190811 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-863015": docker container inspect missing-upgrade-863015 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-863015
	I0717 22:09:17.390458  190811 oci.go:661] temporary error: container missing-upgrade-863015 status is  but expect it to be exited
	I0717 22:09:17.390486  190811 retry.go:31] will retry after 1.98844143s: couldn't verify container is exited. %v: unknown state "missing-upgrade-863015": docker container inspect missing-upgrade-863015 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-863015
	I0717 22:09:19.379939  190811 cli_runner.go:164] Run: docker container inspect missing-upgrade-863015 --format={{.State.Status}}
	W0717 22:09:19.396284  190811 cli_runner.go:211] docker container inspect missing-upgrade-863015 --format={{.State.Status}} returned with exit code 1
	I0717 22:09:19.396347  190811 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-863015": docker container inspect missing-upgrade-863015 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-863015
	I0717 22:09:19.396364  190811 oci.go:661] temporary error: container missing-upgrade-863015 status is  but expect it to be exited
	I0717 22:09:19.396402  190811 retry.go:31] will retry after 2.840414732s: couldn't verify container is exited. %v: unknown state "missing-upgrade-863015": docker container inspect missing-upgrade-863015 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-863015
	I0717 22:09:22.237849  190811 cli_runner.go:164] Run: docker container inspect missing-upgrade-863015 --format={{.State.Status}}
	W0717 22:09:22.255101  190811 cli_runner.go:211] docker container inspect missing-upgrade-863015 --format={{.State.Status}} returned with exit code 1
	I0717 22:09:22.255161  190811 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-863015": docker container inspect missing-upgrade-863015 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-863015
	I0717 22:09:22.255177  190811 oci.go:661] temporary error: container missing-upgrade-863015 status is  but expect it to be exited
	I0717 22:09:22.255212  190811 retry.go:31] will retry after 3.692126613s: couldn't verify container is exited. %v: unknown state "missing-upgrade-863015": docker container inspect missing-upgrade-863015 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-863015
	I0717 22:09:25.947903  190811 cli_runner.go:164] Run: docker container inspect missing-upgrade-863015 --format={{.State.Status}}
	W0717 22:09:25.964103  190811 cli_runner.go:211] docker container inspect missing-upgrade-863015 --format={{.State.Status}} returned with exit code 1
	I0717 22:09:25.964166  190811 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-863015": docker container inspect missing-upgrade-863015 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-863015
	I0717 22:09:25.964182  190811 oci.go:661] temporary error: container missing-upgrade-863015 status is  but expect it to be exited
	I0717 22:09:25.964232  190811 oci.go:88] couldn't shut down missing-upgrade-863015 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-863015": docker container inspect missing-upgrade-863015 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-863015
	 
	I0717 22:09:25.964304  190811 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-863015
	I0717 22:09:25.981800  190811 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-863015
	W0717 22:09:25.997173  190811 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-863015 returned with exit code 1
	I0717 22:09:25.997282  190811 cli_runner.go:164] Run: docker network inspect missing-upgrade-863015 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 22:09:26.013335  190811 cli_runner.go:164] Run: docker network rm missing-upgrade-863015
	I0717 22:09:26.126943  190811 fix.go:114] Sleeping 1 second for extra luck!
	I0717 22:09:27.127098  190811 start.go:125] createHost starting for "" (driver="docker")
	I0717 22:09:27.129379  190811 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0717 22:09:27.129518  190811 start.go:159] libmachine.API.Create for "missing-upgrade-863015" (driver="docker")
	I0717 22:09:27.129548  190811 client.go:168] LocalClient.Create starting
	I0717 22:09:27.129657  190811 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16899-6342/.minikube/certs/ca.pem
	I0717 22:09:27.129703  190811 main.go:141] libmachine: Decoding PEM data...
	I0717 22:09:27.129723  190811 main.go:141] libmachine: Parsing certificate...
	I0717 22:09:27.129791  190811 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16899-6342/.minikube/certs/cert.pem
	I0717 22:09:27.129823  190811 main.go:141] libmachine: Decoding PEM data...
	I0717 22:09:27.129842  190811 main.go:141] libmachine: Parsing certificate...
	I0717 22:09:27.130168  190811 cli_runner.go:164] Run: docker network inspect missing-upgrade-863015 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 22:09:27.146707  190811 cli_runner.go:211] docker network inspect missing-upgrade-863015 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 22:09:27.146763  190811 network_create.go:281] running [docker network inspect missing-upgrade-863015] to gather additional debugging logs...
	I0717 22:09:27.146790  190811 cli_runner.go:164] Run: docker network inspect missing-upgrade-863015
	W0717 22:09:27.162401  190811 cli_runner.go:211] docker network inspect missing-upgrade-863015 returned with exit code 1
	I0717 22:09:27.162429  190811 network_create.go:284] error running [docker network inspect missing-upgrade-863015]: docker network inspect missing-upgrade-863015: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-863015 not found
	I0717 22:09:27.162444  190811 network_create.go:286] output of [docker network inspect missing-upgrade-863015]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-863015 not found
	
	** /stderr **
	I0717 22:09:27.162500  190811 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 22:09:27.179530  190811 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9e70a8cfc12f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:04:c2:84:0a} reservation:<nil>}
	I0717 22:09:27.180718  190811 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cbdb6a80ee68 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:a3:4e:1e:0c} reservation:<nil>}
	I0717 22:09:27.181584  190811 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a2a9f57b99f1 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:2e:38:cc:d7} reservation:<nil>}
	I0717 22:09:27.182549  190811 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001510cc0}
	I0717 22:09:27.182573  190811 network_create.go:123] attempt to create docker network missing-upgrade-863015 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0717 22:09:27.182622  190811 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-863015 missing-upgrade-863015
	I0717 22:09:27.236543  190811 network_create.go:107] docker network missing-upgrade-863015 192.168.76.0/24 created
	I0717 22:09:27.236577  190811 kic.go:117] calculated static IP "192.168.76.2" for the "missing-upgrade-863015" container
	I0717 22:09:27.236638  190811 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 22:09:27.252189  190811 cli_runner.go:164] Run: docker volume create missing-upgrade-863015 --label name.minikube.sigs.k8s.io=missing-upgrade-863015 --label created_by.minikube.sigs.k8s.io=true
	I0717 22:09:27.267671  190811 oci.go:103] Successfully created a docker volume missing-upgrade-863015
	I0717 22:09:27.267756  190811 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-863015-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-863015 --entrypoint /usr/bin/test -v missing-upgrade-863015:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0717 22:09:27.703779  190811 oci.go:107] Successfully prepared a docker volume missing-upgrade-863015
	I0717 22:09:27.703843  190811 preload.go:132] Checking if preload exists for k8s version v1.21.2 and runtime containerd
	I0717 22:09:27.703866  190811 kic.go:190] Starting extracting preloaded images to volume ...
	I0717 22:09:27.703958  190811 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16899-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-863015:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 22:09:33.475895  190811 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16899-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-863015:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (5.771880093s)
	I0717 22:09:33.475927  190811 kic.go:199] duration metric: took 5.772057 seconds to extract preloaded images to volume
	W0717 22:09:33.476063  190811 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0717 22:09:33.476159  190811 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 22:09:33.537748  190811 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-863015 --name missing-upgrade-863015 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-863015 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-863015 --network missing-upgrade-863015 --ip 192.168.76.2 --volume missing-upgrade-863015:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0717 22:09:33.855265  190811 cli_runner.go:164] Run: docker container inspect missing-upgrade-863015 --format={{.State.Running}}
	I0717 22:09:33.874638  190811 cli_runner.go:164] Run: docker container inspect missing-upgrade-863015 --format={{.State.Status}}
	I0717 22:09:33.895028  190811 cli_runner.go:164] Run: docker exec missing-upgrade-863015 stat /var/lib/dpkg/alternatives/iptables
	I0717 22:09:33.954085  190811 oci.go:144] the created container "missing-upgrade-863015" has a running status.
	I0717 22:09:33.954120  190811 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16899-6342/.minikube/machines/missing-upgrade-863015/id_rsa...
	I0717 22:09:34.170703  190811 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16899-6342/.minikube/machines/missing-upgrade-863015/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 22:09:34.193207  190811 cli_runner.go:164] Run: docker container inspect missing-upgrade-863015 --format={{.State.Status}}
	I0717 22:09:34.210829  190811 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 22:09:34.210852  190811 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-863015 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 22:09:34.272661  190811 cli_runner.go:164] Run: docker container inspect missing-upgrade-863015 --format={{.State.Status}}
	I0717 22:09:34.289141  190811 machine.go:88] provisioning docker machine ...
	I0717 22:09:34.289202  190811 ubuntu.go:169] provisioning hostname "missing-upgrade-863015"
	I0717 22:09:34.289282  190811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-863015
	I0717 22:09:34.306284  190811 main.go:141] libmachine: Using SSH client type: native
	I0717 22:09:34.306827  190811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32997 <nil> <nil>}
	I0717 22:09:34.306849  190811 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-863015 && echo "missing-upgrade-863015" | sudo tee /etc/hostname
	I0717 22:09:34.428344  190811 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-863015
	
	I0717 22:09:34.428428  190811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-863015
	I0717 22:09:34.445666  190811 main.go:141] libmachine: Using SSH client type: native
	I0717 22:09:34.446059  190811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32997 <nil> <nil>}
	I0717 22:09:34.446081  190811 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-863015' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-863015/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-863015' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 22:09:34.567420  190811 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:09:34.567449  190811 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16899-6342/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-6342/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-6342/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-6342/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-6342/.minikube}
	I0717 22:09:34.567488  190811 ubuntu.go:177] setting up certificates
	I0717 22:09:34.567505  190811 provision.go:83] configureAuth start
	I0717 22:09:34.567558  190811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-863015
	I0717 22:09:34.583237  190811 provision.go:138] copyHostCerts
	I0717 22:09:34.583291  190811 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-6342/.minikube/ca.pem, removing ...
	I0717 22:09:34.583300  190811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-6342/.minikube/ca.pem
	I0717 22:09:34.583359  190811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-6342/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-6342/.minikube/ca.pem (1082 bytes)
	I0717 22:09:34.583472  190811 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-6342/.minikube/cert.pem, removing ...
	I0717 22:09:34.583483  190811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-6342/.minikube/cert.pem
	I0717 22:09:34.583507  190811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-6342/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-6342/.minikube/cert.pem (1123 bytes)
	I0717 22:09:34.583559  190811 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-6342/.minikube/key.pem, removing ...
	I0717 22:09:34.583567  190811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-6342/.minikube/key.pem
	I0717 22:09:34.583587  190811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-6342/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-6342/.minikube/key.pem (1675 bytes)
	I0717 22:09:34.583644  190811 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-6342/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-6342/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-6342/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-863015 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-863015]
	I0717 22:09:34.703531  190811 provision.go:172] copyRemoteCerts
	I0717 22:09:34.703580  190811 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 22:09:34.703619  190811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-863015
	I0717 22:09:34.719561  190811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32997 SSHKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/machines/missing-upgrade-863015/id_rsa Username:docker}
	I0717 22:09:34.802733  190811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 22:09:34.819312  190811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 22:09:34.835689  190811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 22:09:34.852021  190811 provision.go:86] duration metric: configureAuth took 284.507908ms
	I0717 22:09:34.852040  190811 ubuntu.go:193] setting minikube options for container-runtime
	I0717 22:09:34.852203  190811 config.go:182] Loaded profile config "missing-upgrade-863015": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0717 22:09:34.852232  190811 machine.go:91] provisioned docker machine in 563.053687ms
	I0717 22:09:34.852248  190811 client.go:171] LocalClient.Create took 7.722689698s
	I0717 22:09:34.852271  190811 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-863015" took 7.722753658s
	I0717 22:09:34.852280  190811 start.go:300] post-start starting for "missing-upgrade-863015" (driver="docker")
	I0717 22:09:34.852287  190811 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 22:09:34.852346  190811 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 22:09:34.852392  190811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-863015
	I0717 22:09:34.869135  190811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32997 SSHKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/machines/missing-upgrade-863015/id_rsa Username:docker}
	I0717 22:09:34.954833  190811 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 22:09:34.957557  190811 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 22:09:34.957577  190811 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 22:09:34.957586  190811 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 22:09:34.957591  190811 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0717 22:09:34.957601  190811 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-6342/.minikube/addons for local assets ...
	I0717 22:09:34.957666  190811 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-6342/.minikube/files for local assets ...
	I0717 22:09:34.957755  190811 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-6342/.minikube/files/etc/ssl/certs/131632.pem -> 131632.pem in /etc/ssl/certs
	I0717 22:09:34.957870  190811 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 22:09:34.964035  190811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/files/etc/ssl/certs/131632.pem --> /etc/ssl/certs/131632.pem (1708 bytes)
	I0717 22:09:34.981469  190811 start.go:303] post-start completed in 129.173426ms
	I0717 22:09:34.981859  190811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-863015
	I0717 22:09:34.997968  190811 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/missing-upgrade-863015/config.json ...
	I0717 22:09:34.998244  190811 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 22:09:34.998298  190811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-863015
	I0717 22:09:35.014082  190811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32997 SSHKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/machines/missing-upgrade-863015/id_rsa Username:docker}
	I0717 22:09:35.100272  190811 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 22:09:35.104509  190811 start.go:128] duration metric: createHost completed in 7.977380301s
	I0717 22:09:35.104582  190811 cli_runner.go:164] Run: docker container inspect missing-upgrade-863015 --format={{.State.Status}}
	W0717 22:09:35.121127  190811 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 22:09:35.121158  190811 machine.go:88] provisioning docker machine ...
	I0717 22:09:35.121180  190811 ubuntu.go:169] provisioning hostname "missing-upgrade-863015"
	I0717 22:09:35.121241  190811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-863015
	I0717 22:09:35.137784  190811 main.go:141] libmachine: Using SSH client type: native
	I0717 22:09:35.138178  190811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32997 <nil> <nil>}
	I0717 22:09:35.138193  190811 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-863015 && echo "missing-upgrade-863015" | sudo tee /etc/hostname
	I0717 22:09:35.263500  190811 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-863015
	
	I0717 22:09:35.263583  190811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-863015
	I0717 22:09:35.279949  190811 main.go:141] libmachine: Using SSH client type: native
	I0717 22:09:35.280345  190811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32997 <nil> <nil>}
	I0717 22:09:35.280363  190811 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-863015' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-863015/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-863015' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 22:09:35.399348  190811 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:09:35.399370  190811 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16899-6342/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-6342/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-6342/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-6342/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-6342/.minikube}
	I0717 22:09:35.399389  190811 ubuntu.go:177] setting up certificates
	I0717 22:09:35.399397  190811 provision.go:83] configureAuth start
	I0717 22:09:35.399438  190811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-863015
	I0717 22:09:35.415371  190811 provision.go:138] copyHostCerts
	I0717 22:09:35.415421  190811 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-6342/.minikube/cert.pem, removing ...
	I0717 22:09:35.415430  190811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-6342/.minikube/cert.pem
	I0717 22:09:35.415477  190811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-6342/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-6342/.minikube/cert.pem (1123 bytes)
	I0717 22:09:35.415563  190811 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-6342/.minikube/key.pem, removing ...
	I0717 22:09:35.415572  190811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-6342/.minikube/key.pem
	I0717 22:09:35.415589  190811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-6342/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-6342/.minikube/key.pem (1675 bytes)
	I0717 22:09:35.415634  190811 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-6342/.minikube/ca.pem, removing ...
	I0717 22:09:35.415641  190811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-6342/.minikube/ca.pem
	I0717 22:09:35.415657  190811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-6342/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-6342/.minikube/ca.pem (1082 bytes)
	I0717 22:09:35.415703  190811 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-6342/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-6342/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-6342/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-863015 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-863015]
	I0717 22:09:35.604521  190811 provision.go:172] copyRemoteCerts
	I0717 22:09:35.604582  190811 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 22:09:35.604618  190811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-863015
	I0717 22:09:35.621848  190811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32997 SSHKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/machines/missing-upgrade-863015/id_rsa Username:docker}
	I0717 22:09:35.708178  190811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 22:09:35.732904  190811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 22:09:35.752366  190811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 22:09:35.768862  190811 provision.go:86] duration metric: configureAuth took 369.454788ms
	I0717 22:09:35.768886  190811 ubuntu.go:193] setting minikube options for container-runtime
	I0717 22:09:35.769084  190811 config.go:182] Loaded profile config "missing-upgrade-863015": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0717 22:09:35.769097  190811 machine.go:91] provisioned docker machine in 647.931784ms
	I0717 22:09:35.769104  190811 start.go:300] post-start starting for "missing-upgrade-863015" (driver="docker")
	I0717 22:09:35.769115  190811 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 22:09:35.769165  190811 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 22:09:35.769204  190811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-863015
	I0717 22:09:35.786044  190811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32997 SSHKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/machines/missing-upgrade-863015/id_rsa Username:docker}
	I0717 22:09:35.870706  190811 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 22:09:35.873416  190811 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 22:09:35.873444  190811 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 22:09:35.873458  190811 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 22:09:35.873468  190811 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0717 22:09:35.873481  190811 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-6342/.minikube/addons for local assets ...
	I0717 22:09:35.873539  190811 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-6342/.minikube/files for local assets ...
	I0717 22:09:35.873631  190811 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-6342/.minikube/files/etc/ssl/certs/131632.pem -> 131632.pem in /etc/ssl/certs
	I0717 22:09:35.873735  190811 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 22:09:35.880366  190811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/files/etc/ssl/certs/131632.pem --> /etc/ssl/certs/131632.pem (1708 bytes)
	I0717 22:09:35.896940  190811 start.go:303] post-start completed in 127.824773ms
	I0717 22:09:35.897007  190811 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 22:09:35.897056  190811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-863015
	I0717 22:09:35.913916  190811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32997 SSHKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/machines/missing-upgrade-863015/id_rsa Username:docker}
	I0717 22:09:35.996149  190811 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 22:09:35.999900  190811 fix.go:56] fixHost completed within 23.520899659s
	I0717 22:09:35.999925  190811 start.go:83] releasing machines lock for "missing-upgrade-863015", held for 23.520944866s
	I0717 22:09:35.999979  190811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-863015
	I0717 22:09:36.018362  190811 ssh_runner.go:195] Run: cat /version.json
	I0717 22:09:36.018418  190811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-863015
	I0717 22:09:36.018453  190811 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 22:09:36.018534  190811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-863015
	I0717 22:09:36.038208  190811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32997 SSHKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/machines/missing-upgrade-863015/id_rsa Username:docker}
	I0717 22:09:36.040003  190811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32997 SSHKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/machines/missing-upgrade-863015/id_rsa Username:docker}
	W0717 22:09:36.122993  190811 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0717 22:09:36.123055  190811 ssh_runner.go:195] Run: systemctl --version
	I0717 22:09:36.158997  190811 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 22:09:36.163916  190811 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0717 22:09:36.184117  190811 cni.go:236] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0717 22:09:36.184198  190811 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:09:36.205312  190811 cni.go:268] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 22:09:36.205331  190811 start.go:466] detecting cgroup driver to use...
	I0717 22:09:36.205362  190811 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 22:09:36.205406  190811 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 22:09:36.216719  190811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 22:09:36.225181  190811 docker.go:196] disabling cri-docker service (if available) ...
	I0717 22:09:36.225232  190811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 22:09:36.233694  190811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 22:09:36.243332  190811 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0717 22:09:36.253249  190811 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0717 22:09:36.253303  190811 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 22:09:36.340980  190811 docker.go:212] disabling docker service ...
	I0717 22:09:36.341044  190811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 22:09:36.363063  190811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 22:09:36.376587  190811 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 22:09:36.476538  190811 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 22:09:36.563026  190811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 22:09:36.574191  190811 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 22:09:36.586994  190811 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.4.1"|' /etc/containerd/config.toml"
	I0717 22:09:36.594509  190811 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 22:09:36.603000  190811 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 22:09:36.603048  190811 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 22:09:36.610356  190811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 22:09:36.617472  190811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 22:09:36.624598  190811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 22:09:36.632100  190811 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 22:09:36.642345  190811 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 22:09:36.651682  190811 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 22:09:36.660648  190811 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 22:09:36.669482  190811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 22:09:36.747565  190811 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 22:09:36.820507  190811 start.go:513] Will wait 60s for socket path /run/containerd/containerd.sock
	I0717 22:09:36.820572  190811 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0717 22:09:36.824076  190811 start.go:534] Will wait 60s for crictl version
	I0717 22:09:36.824128  190811 ssh_runner.go:195] Run: which crictl
	I0717 22:09:36.826988  190811 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 22:09:36.865030  190811 retry.go:31] will retry after 14.974018184s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-17T22:09:36Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0717 22:09:51.839928  190811 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 22:09:51.870016  190811 retry.go:31] will retry after 18.919986612s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-17T22:09:51Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0717 22:10:10.791911  190811 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 22:10:10.816516  190811 out.go:177] 
	W0717 22:10:10.817792  190811 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to start container runtime: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-17T22:10:10Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	
	W0717 22:10:10.817806  190811 out.go:239] * 
	W0717 22:10:10.818547  190811 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 22:10:10.820135  190811 out.go:177] 

                                                
                                                
** /stderr **
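The `unknown service runtime.v1alpha2.RuntimeService` failure captured above typically means crictl's CRI request reached a containerd that is not serving that API — either because the containerd config in the old kicbase image ships with the CRI plugin disabled, or because client and daemon disagree on the CRI API version. As an illustrative (not authoritative) aid to triage, this is the shape of the `/etc/containerd/config.toml` fragment to inspect on the node; the comment values are assumptions about this environment, not taken from the log:

```toml
# /etc/containerd/config.toml — illustrative fragment only.
# If "cri" appears in disabled_plugins, every crictl call fails with
# "unknown service runtime.v1alpha2.RuntimeService"; minikube expects:
disabled_plugins = []

[plugins."io.containerd.grpc.v1.cri"]
  # When the plugin is enabled, the CRI service is reachable on the
  # socket minikube waits for: /run/containerd/containerd.sock
```

After editing the config, `sudo systemctl restart containerd` (as the log itself does at the start of this section) would be required for the change to take effect.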
version_upgrade_test.go:343: failed missing container upgrade from v1.22.0. args: out/minikube-linux-amd64 start -p missing-upgrade-863015 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd : exit status 90
version_upgrade_test.go:345: *** TestMissingContainerUpgrade FAILED at 2023-07-17 22:10:10.847642071 +0000 UTC m=+1911.222515011
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-863015
helpers_test.go:235: (dbg) docker inspect missing-upgrade-863015:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bcb5b4d5d2dc4cd14a36b9ae25d2fa493da4b5e1a391ee39554e1415530e7bfb",
	        "Created": "2023-07-17T22:09:33.555806853Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 195114,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T22:09:33.847501986Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8768eddc4356afffe3e639d96dfedd92c4546269e9e4366ab52cf09f53c80b71",
	        "ResolvConfPath": "/var/lib/docker/containers/bcb5b4d5d2dc4cd14a36b9ae25d2fa493da4b5e1a391ee39554e1415530e7bfb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bcb5b4d5d2dc4cd14a36b9ae25d2fa493da4b5e1a391ee39554e1415530e7bfb/hostname",
	        "HostsPath": "/var/lib/docker/containers/bcb5b4d5d2dc4cd14a36b9ae25d2fa493da4b5e1a391ee39554e1415530e7bfb/hosts",
	        "LogPath": "/var/lib/docker/containers/bcb5b4d5d2dc4cd14a36b9ae25d2fa493da4b5e1a391ee39554e1415530e7bfb/bcb5b4d5d2dc4cd14a36b9ae25d2fa493da4b5e1a391ee39554e1415530e7bfb-json.log",
	        "Name": "/missing-upgrade-863015",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-863015:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "missing-upgrade-863015",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1dfc1a4920fe1b595e6bf3103fce176b24d5b8a85badf0ae42a3257999542d00-init/diff:/var/lib/docker/overlay2/300d23be8de51c6ae9fdb855d0b6fc7fc8e7501ee4154765f76c11d0e80b7bb4/diff:/var/lib/docker/overlay2/a2c936d5c9015f32cde1ac11e5cc96449fd70aadc3ecedb8c8ac4fabbb6a848d/diff:/var/lib/docker/overlay2/e57403f5e4e4cc284396f6ad1c95b800ca7544ce40ed1c29b5f6270be82360ac/diff:/var/lib/docker/overlay2/b6ff296ca804312ff88b79aa14bf8195b741bbc065a9c1a94ab80b2460ef6d17/diff:/var/lib/docker/overlay2/474fa823781bc7cbe46f741ee46040e8d30679c78364c176b15ae892b7785768/diff:/var/lib/docker/overlay2/e212ceb6c08003bcfed3b2533e64547a76e58b3d21159ba6a49f3a3b404c2adf/diff:/var/lib/docker/overlay2/02bd842492354286a1f750fa5ed8b6c4733864d7986a2dd59b66a84a5429ceed/diff:/var/lib/docker/overlay2/dc6314dac7dca8871827a259c23467305ca92de2120f67c4a4187b7ebb79af5d/diff:/var/lib/docker/overlay2/1b2f87f37b2c5d9ffd8fe56921be2bbf53a18975bee9e064ad3036011fc328a7/diff:/var/lib/docker/overlay2/870aded27e156a3ca41455002c43f5133129f9667608bff539d614ab68d97106/diff:/var/lib/docker/overlay2/55accea042c3f7de71a24a22c6aca24e57bbb9e6efe0701fe3a1b15bf6519839/diff:/var/lib/docker/overlay2/bbb90969d3f1943300c0294c57b4f156fc0fc0591164a5f8c9e6c055d79d84e5/diff:/var/lib/docker/overlay2/a791865e35092e2673c5bc065d9cff3f1e3b59f216e770be4e823270911c21bd/diff:/var/lib/docker/overlay2/2d5dcb44ae066f2dd7ed490aee03bc1a21931c332c8c1179b36a13e19a226d4c/diff:/var/lib/docker/overlay2/a984b6d920d607e35b92d0bf744426dd8b81ec1b399656f56c6bb2b5ce0ac28a/diff:/var/lib/docker/overlay2/ba81a5f5630574e06054e06155fd9c17bdc0dd1cbf2c9b5f452954625eb3e76e/diff:/var/lib/docker/overlay2/3b15c60a0f892d294174030dbbd94a3b9835c25e905f02de3d50e01e8a6ca2b8/diff:/var/lib/docker/overlay2/040c99834d1fb37f7a077c3bc437cf3923fe80901451e0d084d9c6d97765c4a7/diff:/var/lib/docker/overlay2/f44d25ecfcbaa9ceee9dde7e130ef22bf2263759012b0e04f131aea9dccac09e/diff:/var/lib/docker/overlay2/9e00866ed6498d5c3b271d5bef9679931b21d876ab9df7227ae8c70ab0a6f311/diff:/var/lib/docker/overlay2/fab151343aad08d7fe58a06729cbabfa75d4ce71e13b8dbeba8561305903b476/diff:/var/lib/docker/overlay2/e028ea4fd646b545368b7350cad7ef90a37063f9f3e1ed595f9733434f4d6b24/diff:/var/lib/docker/overlay2/52ae6ed95c445a16cc12e865cf8b7b065e8b648df6b22d28184e5654194a1d84/diff:/var/lib/docker/overlay2/59a6b9b31c2a280666a1dcbe276ed3ed183f62973d45e9f7407426b4eea4c8ac/diff:/var/lib/docker/overlay2/d554ccab0a71b2b18d700098c520facd4b642986c6faad9f1a5625c4a95a3c5b/diff:/var/lib/docker/overlay2/fecb9626e01a99de56572948bd08ba62b5438677984c6cbe222c0196c8d78b40/diff:/var/lib/docker/overlay2/d276227d815b03d6b9f5d717d59133a758e4c9214c275ee178fa68c58b1626e8/diff:/var/lib/docker/overlay2/5d6fc0914d4666a10f38b93d7f177678b878dfb261fa03c664f8ecba196f5858/diff:/var/lib/docker/overlay2/eb744c8092a041f12f9742acbc65a2a7ee1aa074a9a27c75525e94f12a0e8fad/diff:/var/lib/docker/overlay2/9f8c700183395b0a63e759ab421758af8a1012811e09c62c58d772d3ab437e76/diff:/var/lib/docker/overlay2/6f95605afdddc3591cd7bcd29531348013068827ac662488d0e22f7b423146eb/diff:/var/lib/docker/overlay2/e27377cc9099530521ba08520ff57a7eb1b66534a0c32c6dbc1f7e535d10a9df/diff:/var/lib/docker/overlay2/15a052169ddf1e08f48f4f3dcad7b9f58f562a871255544d4acabbd8cc842788/diff:/var/lib/docker/overlay2/31a388d78a7b79e62bb145f843cc8ef635c2795eabde54ddab5d46916976e031/diff:/var/lib/docker/overlay2/69d4ed47b6e690303656247935e624ecaf7eed2ab14258819bc5fc8130c6e498/diff:/var/lib/docker/overlay2/6b813d23e2760db80b4ee812342e307ba16fc3f3ce4eac650c25e883d9bc8127/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1dfc1a4920fe1b595e6bf3103fce176b24d5b8a85badf0ae42a3257999542d00/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1dfc1a4920fe1b595e6bf3103fce176b24d5b8a85badf0ae42a3257999542d00/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1dfc1a4920fe1b595e6bf3103fce176b24d5b8a85badf0ae42a3257999542d00/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-863015",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-863015/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-863015",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-863015",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-863015",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "676e52c2e96ca829d468819f6eb87ecbce66d46a42a890ef707e040ddb0e0a4e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32997"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32996"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32993"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32995"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32994"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/676e52c2e96c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "missing-upgrade-863015": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "bcb5b4d5d2dc",
	                        "missing-upgrade-863015"
	                    ],
	                    "NetworkID": "f54a4da66ff596d133c50bbb24544de106262e11137dc1d62a496bb6f04f9e97",
	                    "EndpointID": "52a360e52faa56a46dcfba2bb8b15a761f71f4ff0102b3c7ce1c16061ac3388f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p missing-upgrade-863015 -n missing-upgrade-863015
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p missing-upgrade-863015 -n missing-upgrade-863015: exit status 2 (254.786532ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMissingContainerUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMissingContainerUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p missing-upgrade-863015 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p missing-upgrade-863015 logs -n 25: (1.080342217s)
helpers_test.go:252: TestMissingContainerUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------|---------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------|---------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p running-upgrade-295219             | running-upgrade-295219    | jenkins | v1.22.0 | Mon, 17 Jul 2023 22:05:58 UTC | Mon, 17 Jul 2023 22:07:40 UTC |
	|         | --memory=2200                         |                           |         |         |                               |                               |
	|         | --vm-driver=docker                    |                           |         |         |                               |                               |
	|         | --container-runtime=containerd        |                           |         |         |                               |                               |
	| start   | -p running-upgrade-295219             | running-upgrade-295219    | jenkins | v1.31.0 | 17 Jul 23 22:07 UTC           | 17 Jul 23 22:08 UTC           |
	|         | --memory=2200                         |                           |         |         |                               |                               |
	|         | --alsologtostderr                     |                           |         |         |                               |                               |
	|         | -v=1 --driver=docker                  |                           |         |         |                               |                               |
	|         | --container-runtime=containerd        |                           |         |         |                               |                               |
	| -p      | stopped-upgrade-944884 stop           | stopped-upgrade-944884    | jenkins | v1.22.0 | Mon, 17 Jul 2023 22:07:29 UTC | Mon, 17 Jul 2023 22:07:42 UTC |
	| start   | -p stopped-upgrade-944884             | stopped-upgrade-944884    | jenkins | v1.31.0 | 17 Jul 23 22:07 UTC           | 17 Jul 23 22:08 UTC           |
	|         | --memory=2200                         |                           |         |         |                               |                               |
	|         | --alsologtostderr                     |                           |         |         |                               |                               |
	|         | -v=1 --driver=docker                  |                           |         |         |                               |                               |
	|         | --container-runtime=containerd        |                           |         |         |                               |                               |
	| ssh     | -p NoKubernetes-940248 sudo           | NoKubernetes-940248       | jenkins | v1.31.0 | 17 Jul 23 22:07 UTC           |                               |
	|         | systemctl is-active --quiet           |                           |         |         |                               |                               |
	|         | service kubelet                       |                           |         |         |                               |                               |
	| stop    | -p NoKubernetes-940248                | NoKubernetes-940248       | jenkins | v1.31.0 | 17 Jul 23 22:07 UTC           | 17 Jul 23 22:07 UTC           |
	| start   | -p NoKubernetes-940248                | NoKubernetes-940248       | jenkins | v1.31.0 | 17 Jul 23 22:07 UTC           | 17 Jul 23 22:07 UTC           |
	|         | --driver=docker                       |                           |         |         |                               |                               |
	|         | --container-runtime=containerd        |                           |         |         |                               |                               |
	| ssh     | -p NoKubernetes-940248 sudo           | NoKubernetes-940248       | jenkins | v1.31.0 | 17 Jul 23 22:07 UTC           |                               |
	|         | systemctl is-active --quiet           |                           |         |         |                               |                               |
	|         | service kubelet                       |                           |         |         |                               |                               |
	| delete  | -p NoKubernetes-940248                | NoKubernetes-940248       | jenkins | v1.31.0 | 17 Jul 23 22:07 UTC           | 17 Jul 23 22:07 UTC           |
	| start   | -p cert-expiration-413031             | cert-expiration-413031    | jenkins | v1.31.0 | 17 Jul 23 22:07 UTC           | 17 Jul 23 22:08 UTC           |
	|         | --memory=2048                         |                           |         |         |                               |                               |
	|         | --cert-expiration=3m                  |                           |         |         |                               |                               |
	|         | --driver=docker                       |                           |         |         |                               |                               |
	|         | --container-runtime=containerd        |                           |         |         |                               |                               |
	| delete  | -p running-upgrade-295219             | running-upgrade-295219    | jenkins | v1.31.0 | 17 Jul 23 22:08 UTC           | 17 Jul 23 22:08 UTC           |
	| delete  | -p stopped-upgrade-944884             | stopped-upgrade-944884    | jenkins | v1.31.0 | 17 Jul 23 22:08 UTC           | 17 Jul 23 22:08 UTC           |
	| start   | -p kubernetes-upgrade-965748          | kubernetes-upgrade-965748 | jenkins | v1.31.0 | 17 Jul 23 22:08 UTC           | 17 Jul 23 22:09 UTC           |
	|         | --memory=2200                         |                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.16.0          |                           |         |         |                               |                               |
	|         | --alsologtostderr                     |                           |         |         |                               |                               |
	|         | -v=1 --driver=docker                  |                           |         |         |                               |                               |
	|         | --container-runtime=containerd        |                           |         |         |                               |                               |
	| start   | -p cert-options-030669                | cert-options-030669       | jenkins | v1.31.0 | 17 Jul 23 22:08 UTC           | 17 Jul 23 22:09 UTC           |
	|         | --memory=2048                         |                           |         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                               |                               |
	|         | --apiserver-names=localhost           |                           |         |         |                               |                               |
	|         | --apiserver-names=www.google.com      |                           |         |         |                               |                               |
	|         | --apiserver-port=8555                 |                           |         |         |                               |                               |
	|         | --driver=docker                       |                           |         |         |                               |                               |
	|         | --container-runtime=containerd        |                           |         |         |                               |                               |
	| start   | -p missing-upgrade-863015             | missing-upgrade-863015    | jenkins | v1.22.0 | Mon, 17 Jul 2023 22:07:41 UTC | Mon, 17 Jul 2023 22:09:01 UTC |
	|         | --memory=2200 --driver=docker         |                           |         |         |                               |                               |
	|         | --container-runtime=containerd        |                           |         |         |                               |                               |
	| ssh     | cert-options-030669 ssh               | cert-options-030669       | jenkins | v1.31.0 | 17 Jul 23 22:09 UTC           | 17 Jul 23 22:09 UTC           |
	|         | openssl x509 -text -noout -in         |                           |         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                               |                               |
	| ssh     | -p cert-options-030669 -- sudo        | cert-options-030669       | jenkins | v1.31.0 | 17 Jul 23 22:09 UTC           | 17 Jul 23 22:09 UTC           |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                               |                               |
	| delete  | -p cert-options-030669                | cert-options-030669       | jenkins | v1.31.0 | 17 Jul 23 22:09 UTC           | 17 Jul 23 22:09 UTC           |
	| start   | -p force-systemd-env-324457           | force-systemd-env-324457  | jenkins | v1.31.0 | 17 Jul 23 22:09 UTC           | 17 Jul 23 22:09 UTC           |
	|         | --memory=2048                         |                           |         |         |                               |                               |
	|         | --alsologtostderr                     |                           |         |         |                               |                               |
	|         | -v=5 --driver=docker                  |                           |         |         |                               |                               |
	|         | --container-runtime=containerd        |                           |         |         |                               |                               |
	| start   | -p missing-upgrade-863015             | missing-upgrade-863015    | jenkins | v1.31.0 | 17 Jul 23 22:09 UTC           |                               |
	|         | --memory=2200                         |                           |         |         |                               |                               |
	|         | --alsologtostderr                     |                           |         |         |                               |                               |
	|         | -v=1 --driver=docker                  |                           |         |         |                               |                               |
	|         | --container-runtime=containerd        |                           |         |         |                               |                               |
	| stop    | -p kubernetes-upgrade-965748          | kubernetes-upgrade-965748 | jenkins | v1.31.0 | 17 Jul 23 22:09 UTC           | 17 Jul 23 22:09 UTC           |
	| start   | -p kubernetes-upgrade-965748          | kubernetes-upgrade-965748 | jenkins | v1.31.0 | 17 Jul 23 22:09 UTC           |                               |
	|         | --memory=2200                         |                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.27.3          |                           |         |         |                               |                               |
	|         | --alsologtostderr                     |                           |         |         |                               |                               |
	|         | -v=1 --driver=docker                  |                           |         |         |                               |                               |
	|         | --container-runtime=containerd        |                           |         |         |                               |                               |
	| ssh     | force-systemd-env-324457              | force-systemd-env-324457  | jenkins | v1.31.0 | 17 Jul 23 22:09 UTC           | 17 Jul 23 22:09 UTC           |
	|         | ssh cat                               |                           |         |         |                               |                               |
	|         | /etc/containerd/config.toml           |                           |         |         |                               |                               |
	| delete  | -p force-systemd-env-324457           | force-systemd-env-324457  | jenkins | v1.31.0 | 17 Jul 23 22:09 UTC           | 17 Jul 23 22:09 UTC           |
	| start   | -p auto-871101 --memory=3072          | auto-871101               | jenkins | v1.31.0 | 17 Jul 23 22:09 UTC           |                               |
	|         | --alsologtostderr --wait=true         |                           |         |         |                               |                               |
	|         | --wait-timeout=15m                    |                           |         |         |                               |                               |
	|         | --driver=docker                       |                           |         |         |                               |                               |
	|         | --container-runtime=containerd        |                           |         |         |                               |                               |
	|---------|---------------------------------------|---------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 22:09:42
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 22:09:42.778400  196961 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:09:42.778736  196961 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:09:42.778749  196961 out.go:309] Setting ErrFile to fd 2...
	I0717 22:09:42.778757  196961 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:09:42.779096  196961 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-6342/.minikube/bin
	I0717 22:09:42.779855  196961 out.go:303] Setting JSON to false
	I0717 22:09:42.781600  196961 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3129,"bootTime":1689628654,"procs":720,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 22:09:42.781687  196961 start.go:138] virtualization: kvm guest
	I0717 22:09:42.783877  196961 out.go:177] * [auto-871101] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 22:09:42.785690  196961 notify.go:220] Checking for updates...
	I0717 22:09:42.785694  196961 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 22:09:42.787101  196961 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 22:09:42.788515  196961 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-6342/kubeconfig
	I0717 22:09:42.789926  196961 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-6342/.minikube
	I0717 22:09:42.791260  196961 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 22:09:42.792681  196961 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 22:09:42.794526  196961 config.go:182] Loaded profile config "cert-expiration-413031": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0717 22:09:42.794676  196961 config.go:182] Loaded profile config "kubernetes-upgrade-965748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0717 22:09:42.794811  196961 config.go:182] Loaded profile config "missing-upgrade-863015": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0717 22:09:42.794909  196961 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 22:09:42.817978  196961 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 22:09:42.818046  196961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 22:09:42.877998  196961 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:74 SystemTime:2023-07-17 22:09:42.868558837 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 22:09:42.878129  196961 docker.go:294] overlay module found
	I0717 22:09:42.879892  196961 out.go:177] * Using the docker driver based on user configuration
	I0717 22:09:42.881212  196961 start.go:298] selected driver: docker
	I0717 22:09:42.881226  196961 start.go:880] validating driver "docker" against <nil>
	I0717 22:09:42.881239  196961 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 22:09:42.882172  196961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 22:09:42.934725  196961 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:74 SystemTime:2023-07-17 22:09:42.925726608 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 22:09:42.934896  196961 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 22:09:42.935188  196961 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 22:09:42.936831  196961 out.go:177] * Using Docker driver with root privileges
	I0717 22:09:42.938193  196961 cni.go:84] Creating CNI manager for ""
	I0717 22:09:42.938225  196961 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0717 22:09:42.938238  196961 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 22:09:42.938255  196961 start_flags.go:319] config:
	{Name:auto-871101 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:auto-871101 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:
cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:09:42.939946  196961 out.go:177] * Starting control plane node auto-871101 in cluster auto-871101
	I0717 22:09:42.941151  196961 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0717 22:09:42.942309  196961 out.go:177] * Pulling base image ...
	I0717 22:09:42.943485  196961 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0717 22:09:42.943516  196961 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16899-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4
	I0717 22:09:42.943522  196961 cache.go:57] Caching tarball of preloaded images
	I0717 22:09:42.943579  196961 preload.go:174] Found /home/jenkins/minikube-integration/16899-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 22:09:42.943590  196961 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on containerd
	I0717 22:09:42.943579  196961 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 22:09:42.943683  196961 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/config.json ...
	I0717 22:09:42.943696  196961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/config.json: {Name:mk49baca27d129232c82189d21854aad15782b03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:09:42.961060  196961 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 22:09:42.961078  196961 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0717 22:09:42.961094  196961 cache.go:195] Successfully downloaded all kic artifacts
	I0717 22:09:42.961121  196961 start.go:365] acquiring machines lock for auto-871101: {Name:mkdafe714b58f9152e24c1270098de81bf871a61 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:09:42.961217  196961 start.go:369] acquired machines lock for "auto-871101" in 75.913µs
	I0717 22:09:42.961243  196961 start.go:93] Provisioning new machine with config: &{Name:auto-871101 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:auto-871101 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0717 22:09:42.961325  196961 start.go:125] createHost starting for "" (driver="docker")
	I0717 22:09:40.364619  193237 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.27.3"
	I0717 22:09:40.428213  193237 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.27.3"
	I0717 22:09:40.517194  193237 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.9"
	I0717 22:09:40.521157  193237 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.5.7-0"
	I0717 22:09:40.521562  193237 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.27.3"
	I0717 22:09:40.525738  193237 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns/coredns:v1.10.1"
	I0717 22:09:40.530582  193237 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.27.3"
	I0717 22:09:41.052574  193237 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.27.3" needs transfer: "registry.k8s.io/kube-proxy:v1.27.3" does not exist at hash "5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c" in container runtime
	I0717 22:09:41.052642  193237 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.27.3
	I0717 22:09:41.052746  193237 ssh_runner.go:195] Run: which crictl
	I0717 22:09:41.146483  193237 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.27.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.27.3" does not exist at hash "41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a" in container runtime
	I0717 22:09:41.146618  193237 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.27.3
	I0717 22:09:41.146691  193237 ssh_runner.go:195] Run: which crictl
	I0717 22:09:41.347991  193237 cache_images.go:116] "registry.k8s.io/pause:3.9" needs transfer: "registry.k8s.io/pause:3.9" does not exist at hash "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c" in container runtime
	I0717 22:09:41.348039  193237 cri.go:218] Removing image: registry.k8s.io/pause:3.9
	I0717 22:09:41.348079  193237 ssh_runner.go:195] Run: which crictl
	I0717 22:09:41.355249  193237 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.27.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.27.3" does not exist at hash "7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f" in container runtime
	I0717 22:09:41.355292  193237 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 22:09:41.355330  193237 ssh_runner.go:195] Run: which crictl
	I0717 22:09:41.356937  193237 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.27.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.27.3" does not exist at hash "08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a" in container runtime
	I0717 22:09:41.356974  193237 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.27.3
	I0717 22:09:41.357023  193237 ssh_runner.go:195] Run: which crictl
	I0717 22:09:41.367229  193237 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0717 22:09:41.367266  193237 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0717 22:09:41.367300  193237 ssh_runner.go:195] Run: which crictl
	I0717 22:09:41.367303  193237 cache_images.go:116] "registry.k8s.io/etcd:3.5.7-0" needs transfer: "registry.k8s.io/etcd:3.5.7-0" does not exist at hash "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681" in container runtime
	I0717 22:09:41.367342  193237 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.7-0
	I0717 22:09:41.367351  193237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.27.3
	I0717 22:09:41.367378  193237 ssh_runner.go:195] Run: which crictl
	I0717 22:09:41.367389  193237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.27.3
	I0717 22:09:41.367437  193237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.9
	I0717 22:09:41.367459  193237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 22:09:41.367504  193237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.27.3
	I0717 22:09:41.431247  193237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.7-0
	I0717 22:09:41.431699  193237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0717 22:09:42.082033  193237 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I0717 22:09:42.330885  193237 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-6342/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9
	I0717 22:09:42.330911  193237 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-6342/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3
	I0717 22:09:42.330984  193237 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-6342/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3
	I0717 22:09:42.331010  193237 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-6342/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3
	I0717 22:09:42.331061  193237 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-6342/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3
	I0717 22:09:42.331395  193237 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-6342/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0
	I0717 22:09:42.331417  193237 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-6342/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0717 22:09:42.331434  193237 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0717 22:09:42.331466  193237 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:09:42.331523  193237 ssh_runner.go:195] Run: which crictl
	I0717 22:09:42.334834  193237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:09:42.407521  193237 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-6342/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 22:09:42.407628  193237 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0717 22:09:42.410835  193237 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0717 22:09:42.410853  193237 containerd.go:269] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 22:09:42.410897  193237 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0717 22:09:42.810045  193237 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16899-6342/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 22:09:42.810095  193237 cache_images.go:92] LoadImages completed in 2.647757513s
	W0717 22:09:42.810196  193237 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16899-6342/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9: no such file or directory
	I0717 22:09:42.810249  193237 ssh_runner.go:195] Run: sudo crictl info
	I0717 22:09:42.848446  193237 cni.go:84] Creating CNI manager for ""
	I0717 22:09:42.848473  193237 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0717 22:09:42.848491  193237 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 22:09:42.848519  193237 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-965748 NodeName:kubernetes-upgrade-965748 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cert
s/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 22:09:42.848679  193237 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kubernetes-upgrade-965748"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 22:09:42.848759  193237 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=kubernetes-upgrade-965748 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:kubernetes-upgrade-965748 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 22:09:42.848816  193237 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 22:09:42.860119  193237 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 22:09:42.860194  193237 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 22:09:42.869135  193237 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (398 bytes)
	I0717 22:09:42.885848  193237 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 22:09:42.902509  193237 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0717 22:09:42.919498  193237 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0717 22:09:42.923721  193237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:09:42.933442  193237 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/kubernetes-upgrade-965748 for IP: 192.168.103.2
	I0717 22:09:42.933471  193237 certs.go:190] acquiring lock for shared ca certs: {Name:mk55d4c61e71de076f17ec844eb5cb8d7320ed01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:09:42.933615  193237 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16899-6342/.minikube/ca.key
	I0717 22:09:42.933666  193237 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16899-6342/.minikube/proxy-client-ca.key
	I0717 22:09:42.933732  193237 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/kubernetes-upgrade-965748/client.key
	I0717 22:09:42.933786  193237 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/kubernetes-upgrade-965748/apiserver.key.33fce0b9
	I0717 22:09:42.933823  193237 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/kubernetes-upgrade-965748/proxy-client.key
	I0717 22:09:42.933924  193237 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-6342/.minikube/certs/home/jenkins/minikube-integration/16899-6342/.minikube/certs/13163.pem (1338 bytes)
	W0717 22:09:42.933950  193237 certs.go:433] ignoring /home/jenkins/minikube-integration/16899-6342/.minikube/certs/home/jenkins/minikube-integration/16899-6342/.minikube/certs/13163_empty.pem, impossibly tiny 0 bytes
	I0717 22:09:42.933960  193237 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-6342/.minikube/certs/home/jenkins/minikube-integration/16899-6342/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 22:09:42.933984  193237 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-6342/.minikube/certs/home/jenkins/minikube-integration/16899-6342/.minikube/certs/ca.pem (1082 bytes)
	I0717 22:09:42.934010  193237 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-6342/.minikube/certs/home/jenkins/minikube-integration/16899-6342/.minikube/certs/cert.pem (1123 bytes)
	I0717 22:09:42.934029  193237 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-6342/.minikube/certs/home/jenkins/minikube-integration/16899-6342/.minikube/certs/key.pem (1675 bytes)
	I0717 22:09:42.934068  193237 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-6342/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16899-6342/.minikube/files/etc/ssl/certs/131632.pem (1708 bytes)
	I0717 22:09:42.934588  193237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/kubernetes-upgrade-965748/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 22:09:42.957392  193237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/kubernetes-upgrade-965748/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 22:09:42.979831  193237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/kubernetes-upgrade-965748/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 22:09:43.000776  193237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/kubernetes-upgrade-965748/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 22:09:43.022672  193237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 22:09:43.045390  193237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 22:09:43.067342  193237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 22:09:43.089056  193237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 22:09:43.110678  193237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/files/etc/ssl/certs/131632.pem --> /usr/share/ca-certificates/131632.pem (1708 bytes)
	I0717 22:09:43.134069  193237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 22:09:43.168893  193237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/certs/13163.pem --> /usr/share/ca-certificates/13163.pem (1338 bytes)
	I0717 22:09:43.191409  193237 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 22:09:43.209037  193237 ssh_runner.go:195] Run: openssl version
	I0717 22:09:43.213968  193237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 22:09:43.222481  193237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:09:43.225552  193237 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:39 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:09:43.225591  193237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:09:43.231654  193237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 22:09:43.240493  193237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13163.pem && ln -fs /usr/share/ca-certificates/13163.pem /etc/ssl/certs/13163.pem"
	I0717 22:09:43.248754  193237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13163.pem
	I0717 22:09:43.251729  193237 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 21:44 /usr/share/ca-certificates/13163.pem
	I0717 22:09:43.251770  193237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13163.pem
	I0717 22:09:43.257937  193237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13163.pem /etc/ssl/certs/51391683.0"
	I0717 22:09:43.266568  193237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131632.pem && ln -fs /usr/share/ca-certificates/131632.pem /etc/ssl/certs/131632.pem"
	I0717 22:09:43.275095  193237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131632.pem
	I0717 22:09:43.278398  193237 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 21:44 /usr/share/ca-certificates/131632.pem
	I0717 22:09:43.278495  193237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131632.pem
	I0717 22:09:43.284807  193237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131632.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 22:09:43.294577  193237 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 22:09:43.298769  193237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 22:09:43.308515  193237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 22:09:43.317539  193237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 22:09:43.325178  193237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 22:09:43.331717  193237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 22:09:43.338008  193237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 22:09:43.344133  193237 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-965748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kubernetes-upgrade-965748 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:09:43.344214  193237 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0717 22:09:43.344249  193237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:09:43.382060  193237 cri.go:89] found id: ""
	I0717 22:09:43.382192  193237 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 22:09:43.390549  193237 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 22:09:43.390574  193237 kubeadm.go:636] restartCluster start
	I0717 22:09:43.390633  193237 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 22:09:43.397998  193237 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:09:43.398614  193237 kubeconfig.go:135] verify returned: extract IP: "kubernetes-upgrade-965748" does not appear in /home/jenkins/minikube-integration/16899-6342/kubeconfig
	I0717 22:09:43.398968  193237 kubeconfig.go:146] "kubernetes-upgrade-965748" context is missing from /home/jenkins/minikube-integration/16899-6342/kubeconfig - will repair!
	I0717 22:09:43.399497  193237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-6342/kubeconfig: {Name:mk5af2760efebf8dd7d91150f7a763b04339dbdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:09:43.400404  193237 kapi.go:59] client config for kubernetes-upgrade-965748: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16899-6342/.minikube/profiles/kubernetes-upgrade-965748/client.crt", KeyFile:"/home/jenkins/minikube-integration/16899-6342/.minikube/profiles/kubernetes-upgrade-965748/client.key", CAFile:"/home/jenkins/minikube-integration/16899-6342/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 22:09:43.401186  193237 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 22:09:43.409563  193237 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2023-07-17 22:08:52.393604940 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2023-07-17 22:09:42.913429268 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.103.2
	@@ -11,13 +11,13 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /run/containerd/containerd.sock
	+  criSocket: unix:///run/containerd/containerd.sock
	   name: "kubernetes-upgrade-965748"
	   kubeletExtraArgs:
	     node-ip: 192.168.103.2
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	@@ -31,16 +31,14 @@
	   extraArgs:
	     leader-elect: "false"
	 certificatesDir: /var/lib/minikube/certs
	-clusterName: kubernetes-upgrade-965748
	+clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	-dns:
	-  type: CoreDNS
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	-      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.103.2:2381
	-kubernetesVersion: v1.16.0
	+      proxy-refresh-interval: "70000"
	+kubernetesVersion: v1.27.3
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I0717 22:09:43.409578  193237 kubeadm.go:1128] stopping kube-system containers ...
	I0717 22:09:43.409588  193237 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0717 22:09:43.409622  193237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:09:43.444332  193237 cri.go:89] found id: ""
	I0717 22:09:43.444422  193237 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 22:09:43.473435  193237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 22:09:43.481738  193237 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5703 Jul 17 22:08 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5743 Jul 17 22:08 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5823 Jul 17 22:08 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5687 Jul 17 22:08 /etc/kubernetes/scheduler.conf
	
	I0717 22:09:43.481821  193237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 22:09:43.490678  193237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 22:09:43.498403  193237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 22:09:43.506982  193237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 22:09:43.515081  193237 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 22:09:43.523506  193237 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 22:09:43.523530  193237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:09:43.571458  193237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:09:44.851549  193237 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.280033343s)
	I0717 22:09:44.851581  193237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:09:44.993427  193237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:09:45.043900  193237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:09:45.143513  193237 api_server.go:52] waiting for apiserver process to appear ...
	I0717 22:09:45.143581  193237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:09:42.963575  196961 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0717 22:09:42.963799  196961 start.go:159] libmachine.API.Create for "auto-871101" (driver="docker")
	I0717 22:09:42.963834  196961 client.go:168] LocalClient.Create starting
	I0717 22:09:42.963913  196961 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16899-6342/.minikube/certs/ca.pem
	I0717 22:09:42.963946  196961 main.go:141] libmachine: Decoding PEM data...
	I0717 22:09:42.963958  196961 main.go:141] libmachine: Parsing certificate...
	I0717 22:09:42.964016  196961 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16899-6342/.minikube/certs/cert.pem
	I0717 22:09:42.964033  196961 main.go:141] libmachine: Decoding PEM data...
	I0717 22:09:42.964057  196961 main.go:141] libmachine: Parsing certificate...
	I0717 22:09:42.964369  196961 cli_runner.go:164] Run: docker network inspect auto-871101 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 22:09:42.980536  196961 cli_runner.go:211] docker network inspect auto-871101 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 22:09:42.980611  196961 network_create.go:281] running [docker network inspect auto-871101] to gather additional debugging logs...
	I0717 22:09:42.980637  196961 cli_runner.go:164] Run: docker network inspect auto-871101
	W0717 22:09:42.996923  196961 cli_runner.go:211] docker network inspect auto-871101 returned with exit code 1
	I0717 22:09:42.996958  196961 network_create.go:284] error running [docker network inspect auto-871101]: docker network inspect auto-871101: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-871101 not found
	I0717 22:09:42.996975  196961 network_create.go:286] output of [docker network inspect auto-871101]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-871101 not found
	
	** /stderr **
	I0717 22:09:42.997025  196961 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 22:09:43.014358  196961 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9e70a8cfc12f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:04:c2:84:0a} reservation:<nil>}
	I0717 22:09:43.015541  196961 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cbdb6a80ee68 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:a3:4e:1e:0c} reservation:<nil>}
	I0717 22:09:43.017216  196961 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a2a9f57b99f1 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:2e:38:cc:d7} reservation:<nil>}
	I0717 22:09:43.018193  196961 network.go:214] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-f54a4da66ff5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:ac:1a:c8:40} reservation:<nil>}
	I0717 22:09:43.019409  196961 network.go:214] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-502084920c13 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:9b:59:d6:8b} reservation:<nil>}
	I0717 22:09:43.020281  196961 network.go:214] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-f293eb0db0db IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:02:42:b5:6b:fa:cd} reservation:<nil>}
	I0717 22:09:43.021065  196961 network.go:214] skipping subnet 192.168.103.0/24 that is taken: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName:br-c601ed08a82e IfaceIPv4:192.168.103.1 IfaceMTU:1500 IfaceMAC:02:42:da:9d:58:d2} reservation:<nil>}
	I0717 22:09:43.021943  196961 network.go:209] using free private subnet 192.168.112.0/24: &{IP:192.168.112.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.112.0/24 Gateway:192.168.112.1 ClientMin:192.168.112.2 ClientMax:192.168.112.254 Broadcast:192.168.112.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000059ea0}
	I0717 22:09:43.021962  196961 network_create.go:123] attempt to create docker network auto-871101 192.168.112.0/24 with gateway 192.168.112.1 and MTU of 1500 ...
	I0717 22:09:43.022009  196961 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.112.0/24 --gateway=192.168.112.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-871101 auto-871101
	I0717 22:09:43.079974  196961 network_create.go:107] docker network auto-871101 192.168.112.0/24 created
	I0717 22:09:43.080010  196961 kic.go:117] calculated static IP "192.168.112.2" for the "auto-871101" container
	I0717 22:09:43.080077  196961 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 22:09:43.097187  196961 cli_runner.go:164] Run: docker volume create auto-871101 --label name.minikube.sigs.k8s.io=auto-871101 --label created_by.minikube.sigs.k8s.io=true
	I0717 22:09:43.113030  196961 oci.go:103] Successfully created a docker volume auto-871101
	I0717 22:09:43.113103  196961 cli_runner.go:164] Run: docker run --rm --name auto-871101-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-871101 --entrypoint /usr/bin/test -v auto-871101:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0717 22:09:43.663050  196961 oci.go:107] Successfully prepared a docker volume auto-871101
	I0717 22:09:43.663080  196961 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0717 22:09:43.663098  196961 kic.go:190] Starting extracting preloaded images to volume ...
	I0717 22:09:43.663182  196961 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16899-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-871101:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 22:09:45.654447  193237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:09:46.153544  193237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:09:46.654215  193237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:09:47.153509  193237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:09:47.654097  193237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:09:48.154493  193237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:09:48.654461  193237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:09:49.154194  193237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:09:49.653738  193237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:09:50.154137  193237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:09:48.630547  196961 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16899-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-871101:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.967307891s)
	I0717 22:09:48.630589  196961 kic.go:199] duration metric: took 4.967485 seconds to extract preloaded images to volume
	W0717 22:09:48.630742  196961 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0717 22:09:48.630878  196961 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 22:09:48.691934  196961 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-871101 --name auto-871101 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-871101 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-871101 --network auto-871101 --ip 192.168.112.2 --volume auto-871101:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 22:09:49.020011  196961 cli_runner.go:164] Run: docker container inspect auto-871101 --format={{.State.Running}}
	I0717 22:09:49.038432  196961 cli_runner.go:164] Run: docker container inspect auto-871101 --format={{.State.Status}}
	I0717 22:09:49.058801  196961 cli_runner.go:164] Run: docker exec auto-871101 stat /var/lib/dpkg/alternatives/iptables
	I0717 22:09:49.129531  196961 oci.go:144] the created container "auto-871101" has a running status.
	I0717 22:09:49.129565  196961 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16899-6342/.minikube/machines/auto-871101/id_rsa...
	I0717 22:09:49.417396  196961 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16899-6342/.minikube/machines/auto-871101/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 22:09:49.441820  196961 cli_runner.go:164] Run: docker container inspect auto-871101 --format={{.State.Status}}
	I0717 22:09:49.462460  196961 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 22:09:49.462488  196961 kic_runner.go:114] Args: [docker exec --privileged auto-871101 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 22:09:49.547184  196961 cli_runner.go:164] Run: docker container inspect auto-871101 --format={{.State.Status}}
	I0717 22:09:49.566602  196961 machine.go:88] provisioning docker machine ...
	I0717 22:09:49.566644  196961 ubuntu.go:169] provisioning hostname "auto-871101"
	I0717 22:09:49.566708  196961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-871101
	I0717 22:09:49.586285  196961 main.go:141] libmachine: Using SSH client type: native
	I0717 22:09:49.588547  196961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 33002 <nil> <nil>}
	I0717 22:09:49.588577  196961 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-871101 && echo "auto-871101" | sudo tee /etc/hostname
	I0717 22:09:49.830889  196961 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-871101
	
	I0717 22:09:49.830972  196961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-871101
	I0717 22:09:49.850678  196961 main.go:141] libmachine: Using SSH client type: native
	I0717 22:09:49.851078  196961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 33002 <nil> <nil>}
	I0717 22:09:49.851098  196961 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-871101' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-871101/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-871101' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 22:09:49.983880  196961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:09:49.983910  196961 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16899-6342/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-6342/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-6342/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-6342/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-6342/.minikube}
	I0717 22:09:49.983940  196961 ubuntu.go:177] setting up certificates
	I0717 22:09:49.983951  196961 provision.go:83] configureAuth start
	I0717 22:09:49.984012  196961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-871101
	I0717 22:09:50.002389  196961 provision.go:138] copyHostCerts
	I0717 22:09:50.002471  196961 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-6342/.minikube/ca.pem, removing ...
	I0717 22:09:50.002490  196961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-6342/.minikube/ca.pem
	I0717 22:09:50.002568  196961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-6342/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-6342/.minikube/ca.pem (1082 bytes)
	I0717 22:09:50.002667  196961 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-6342/.minikube/cert.pem, removing ...
	I0717 22:09:50.002677  196961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-6342/.minikube/cert.pem
	I0717 22:09:50.002702  196961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-6342/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-6342/.minikube/cert.pem (1123 bytes)
	I0717 22:09:50.002755  196961 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-6342/.minikube/key.pem, removing ...
	I0717 22:09:50.002762  196961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-6342/.minikube/key.pem
	I0717 22:09:50.002781  196961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-6342/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-6342/.minikube/key.pem (1675 bytes)
	I0717 22:09:50.002826  196961 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-6342/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-6342/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-6342/.minikube/certs/ca-key.pem org=jenkins.auto-871101 san=[192.168.112.2 127.0.0.1 localhost 127.0.0.1 minikube auto-871101]
	I0717 22:09:50.117502  196961 provision.go:172] copyRemoteCerts
	I0717 22:09:50.117550  196961 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 22:09:50.117583  196961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-871101
	I0717 22:09:50.135485  196961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33002 SSHKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/machines/auto-871101/id_rsa Username:docker}
	I0717 22:09:50.231851  196961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 22:09:50.257690  196961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0717 22:09:50.282337  196961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 22:09:50.302914  196961 provision.go:86] duration metric: configureAuth took 318.948505ms
	I0717 22:09:50.302940  196961 ubuntu.go:193] setting minikube options for container-runtime
	I0717 22:09:50.303111  196961 config.go:182] Loaded profile config "auto-871101": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0717 22:09:50.303124  196961 machine.go:91] provisioned docker machine in 736.504195ms
	I0717 22:09:50.303129  196961 client.go:171] LocalClient.Create took 7.339284938s
	I0717 22:09:50.303156  196961 start.go:167] duration metric: libmachine.API.Create for "auto-871101" took 7.339350803s
	I0717 22:09:50.303167  196961 start.go:300] post-start starting for "auto-871101" (driver="docker")
	I0717 22:09:50.303180  196961 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 22:09:50.303229  196961 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 22:09:50.303263  196961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-871101
	I0717 22:09:50.318894  196961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33002 SSHKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/machines/auto-871101/id_rsa Username:docker}
	I0717 22:09:50.412337  196961 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 22:09:50.415090  196961 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 22:09:50.415128  196961 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 22:09:50.415150  196961 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 22:09:50.415159  196961 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 22:09:50.415168  196961 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-6342/.minikube/addons for local assets ...
	I0717 22:09:50.415228  196961 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-6342/.minikube/files for local assets ...
	I0717 22:09:50.415316  196961 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-6342/.minikube/files/etc/ssl/certs/131632.pem -> 131632.pem in /etc/ssl/certs
	I0717 22:09:50.415437  196961 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 22:09:50.422661  196961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/files/etc/ssl/certs/131632.pem --> /etc/ssl/certs/131632.pem (1708 bytes)
	I0717 22:09:50.445063  196961 start.go:303] post-start completed in 141.881825ms
	I0717 22:09:50.445462  196961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-871101
	I0717 22:09:50.464989  196961 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/config.json ...
	I0717 22:09:50.465239  196961 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 22:09:50.465290  196961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-871101
	I0717 22:09:50.484509  196961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33002 SSHKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/machines/auto-871101/id_rsa Username:docker}
	I0717 22:09:50.573375  196961 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 22:09:50.578178  196961 start.go:128] duration metric: createHost completed in 7.616840803s
	I0717 22:09:50.578200  196961 start.go:83] releasing machines lock for "auto-871101", held for 7.616970188s
	I0717 22:09:50.578281  196961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-871101
	I0717 22:09:50.596544  196961 ssh_runner.go:195] Run: cat /version.json
	I0717 22:09:50.596607  196961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-871101
	I0717 22:09:50.596652  196961 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 22:09:50.596701  196961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-871101
	I0717 22:09:50.613090  196961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33002 SSHKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/machines/auto-871101/id_rsa Username:docker}
	I0717 22:09:50.614458  196961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33002 SSHKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/machines/auto-871101/id_rsa Username:docker}
	I0717 22:09:50.810949  196961 ssh_runner.go:195] Run: systemctl --version
	I0717 22:09:50.815377  196961 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 22:09:50.819938  196961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0717 22:09:50.843199  196961 cni.go:236] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0717 22:09:50.843268  196961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:09:50.871415  196961 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0717 22:09:50.871436  196961 start.go:466] detecting cgroup driver to use...
	I0717 22:09:50.871470  196961 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 22:09:50.871527  196961 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 22:09:50.884101  196961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 22:09:50.896329  196961 docker.go:196] disabling cri-docker service (if available) ...
	I0717 22:09:50.896388  196961 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 22:09:50.911342  196961 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 22:09:50.924908  196961 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 22:09:51.017580  196961 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 22:09:51.118532  196961 docker.go:212] disabling docker service ...
	I0717 22:09:51.118599  196961 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 22:09:51.138320  196961 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 22:09:51.150761  196961 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 22:09:51.229889  196961 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 22:09:51.327250  196961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 22:09:51.336965  196961 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 22:09:51.350536  196961 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 22:09:51.358720  196961 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 22:09:51.368101  196961 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 22:09:51.368163  196961 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 22:09:51.376359  196961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 22:09:51.384650  196961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 22:09:51.393179  196961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 22:09:51.401454  196961 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 22:09:51.409010  196961 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 22:09:51.417557  196961 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 22:09:51.424511  196961 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 22:09:51.432005  196961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 22:09:51.506767  196961 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 22:09:51.573415  196961 start.go:513] Will wait 60s for socket path /run/containerd/containerd.sock
	I0717 22:09:51.573489  196961 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0717 22:09:51.577675  196961 start.go:534] Will wait 60s for crictl version
	I0717 22:09:51.577717  196961 ssh_runner.go:195] Run: which crictl
	I0717 22:09:51.581162  196961 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 22:09:51.621575  196961 start.go:550] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.21
	RuntimeApiVersion:  v1
	I0717 22:09:51.621633  196961 ssh_runner.go:195] Run: containerd --version
	I0717 22:09:51.649620  196961 ssh_runner.go:195] Run: containerd --version
	I0717 22:09:51.680672  196961 out.go:177] * Preparing Kubernetes v1.27.3 on containerd 1.6.21 ...
	I0717 22:09:51.839928  190811 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 22:09:51.870016  190811 retry.go:31] will retry after 18.919986612s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-17T22:09:51Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0717 22:09:51.681971  196961 cli_runner.go:164] Run: docker network inspect auto-871101 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 22:09:51.700533  196961 ssh_runner.go:195] Run: grep 192.168.112.1	host.minikube.internal$ /etc/hosts
	I0717 22:09:51.704269  196961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.112.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:09:51.715993  196961 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0717 22:09:51.716049  196961 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:09:51.752212  196961 containerd.go:604] all images are preloaded for containerd runtime.
	I0717 22:09:51.752235  196961 containerd.go:518] Images already preloaded, skipping extraction
	I0717 22:09:51.752287  196961 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:09:51.787556  196961 containerd.go:604] all images are preloaded for containerd runtime.
	I0717 22:09:51.787577  196961 cache_images.go:84] Images are preloaded, skipping loading
	I0717 22:09:51.787625  196961 ssh_runner.go:195] Run: sudo crictl info
	I0717 22:09:51.826096  196961 cni.go:84] Creating CNI manager for ""
	I0717 22:09:51.826122  196961 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0717 22:09:51.826135  196961 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 22:09:51.826155  196961 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.112.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-871101 NodeName:auto-871101 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.112.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.112.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 22:09:51.826320  196961 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.112.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "auto-871101"
	  kubeletExtraArgs:
	    node-ip: 192.168.112.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.112.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 22:09:51.826390  196961 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=auto-871101 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.112.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:auto-871101 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 22:09:51.826432  196961 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 22:09:51.834950  196961 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 22:09:51.835033  196961 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 22:09:51.843340  196961 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (384 bytes)
	I0717 22:09:51.862703  196961 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 22:09:51.882033  196961 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0717 22:09:51.899447  196961 ssh_runner.go:195] Run: grep 192.168.112.2	control-plane.minikube.internal$ /etc/hosts
	I0717 22:09:51.902900  196961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.112.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:09:51.914725  196961 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101 for IP: 192.168.112.2
	I0717 22:09:51.914756  196961 certs.go:190] acquiring lock for shared ca certs: {Name:mk55d4c61e71de076f17ec844eb5cb8d7320ed01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:09:51.914917  196961 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16899-6342/.minikube/ca.key
	I0717 22:09:51.914960  196961 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16899-6342/.minikube/proxy-client-ca.key
	I0717 22:09:51.915003  196961 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/client.key
	I0717 22:09:51.915021  196961 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/client.crt with IP's: []
	I0717 22:09:52.023163  196961 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/client.crt ...
	I0717 22:09:52.023190  196961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/client.crt: {Name:mkd98930c54b8e2db1ba5f2adae0fadf7c29f565 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:09:52.023331  196961 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/client.key ...
	I0717 22:09:52.023343  196961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/client.key: {Name:mk6490c67d874cea0fb3a36138b95a8bc71e03c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:09:52.023414  196961 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/apiserver.key.9c554139
	I0717 22:09:52.023427  196961 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/apiserver.crt.9c554139 with IP's: [192.168.112.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 22:09:52.397306  196961 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/apiserver.crt.9c554139 ...
	I0717 22:09:52.397330  196961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/apiserver.crt.9c554139: {Name:mk745f3a03a45f2ffe02160ffd31a8b6a4694c68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:09:52.397487  196961 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/apiserver.key.9c554139 ...
	I0717 22:09:52.397510  196961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/apiserver.key.9c554139: {Name:mkc78aa35214680f76ba1963c1f927d2cf27244d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:09:52.397618  196961 certs.go:337] copying /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/apiserver.crt.9c554139 -> /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/apiserver.crt
	I0717 22:09:52.397704  196961 certs.go:341] copying /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/apiserver.key.9c554139 -> /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/apiserver.key
	I0717 22:09:52.397782  196961 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/proxy-client.key
	I0717 22:09:52.397797  196961 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/proxy-client.crt with IP's: []
	I0717 22:09:52.534644  196961 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/proxy-client.crt ...
	I0717 22:09:52.534670  196961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/proxy-client.crt: {Name:mkc31ee0756146d6d5461b76070c48ed5525cae4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:09:52.534812  196961 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/proxy-client.key ...
	I0717 22:09:52.534823  196961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/proxy-client.key: {Name:mk635e2c71f7835c2d3503fe5bb719b04baf41dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:09:52.534987  196961 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-6342/.minikube/certs/home/jenkins/minikube-integration/16899-6342/.minikube/certs/13163.pem (1338 bytes)
	W0717 22:09:52.535022  196961 certs.go:433] ignoring /home/jenkins/minikube-integration/16899-6342/.minikube/certs/home/jenkins/minikube-integration/16899-6342/.minikube/certs/13163_empty.pem, impossibly tiny 0 bytes
	I0717 22:09:52.535031  196961 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-6342/.minikube/certs/home/jenkins/minikube-integration/16899-6342/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 22:09:52.535056  196961 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-6342/.minikube/certs/home/jenkins/minikube-integration/16899-6342/.minikube/certs/ca.pem (1082 bytes)
	I0717 22:09:52.535080  196961 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-6342/.minikube/certs/home/jenkins/minikube-integration/16899-6342/.minikube/certs/cert.pem (1123 bytes)
	I0717 22:09:52.535100  196961 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-6342/.minikube/certs/home/jenkins/minikube-integration/16899-6342/.minikube/certs/key.pem (1675 bytes)
	I0717 22:09:52.535136  196961 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-6342/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16899-6342/.minikube/files/etc/ssl/certs/131632.pem (1708 bytes)
	I0717 22:09:52.535669  196961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 22:09:52.557374  196961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 22:09:52.577085  196961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 22:09:52.596647  196961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 22:09:52.618433  196961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 22:09:52.638126  196961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 22:09:52.657690  196961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 22:09:52.677469  196961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 22:09:52.696844  196961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/certs/13163.pem --> /usr/share/ca-certificates/13163.pem (1338 bytes)
	I0717 22:09:52.716439  196961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/files/etc/ssl/certs/131632.pem --> /usr/share/ca-certificates/131632.pem (1708 bytes)
	I0717 22:09:52.736517  196961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6342/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 22:09:52.756317  196961 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 22:09:52.770980  196961 ssh_runner.go:195] Run: openssl version
	I0717 22:09:52.775477  196961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131632.pem && ln -fs /usr/share/ca-certificates/131632.pem /etc/ssl/certs/131632.pem"
	I0717 22:09:50.654230  193237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:09:51.153728  193237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:09:51.165824  193237 api_server.go:72] duration metric: took 6.022312674s to wait for apiserver process to appear ...
	I0717 22:09:51.165846  193237 api_server.go:88] waiting for apiserver healthz status ...
	I0717 22:09:51.165861  193237 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0717 22:09:51.166195  193237 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I0717 22:09:51.666854  193237 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0717 22:09:52.783331  196961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131632.pem
	I0717 22:09:52.786125  196961 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 21:44 /usr/share/ca-certificates/131632.pem
	I0717 22:09:52.786168  196961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131632.pem
	I0717 22:09:52.791968  196961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131632.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 22:09:52.799540  196961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 22:09:52.807366  196961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:09:52.810175  196961 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:39 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:09:52.810216  196961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:09:52.815914  196961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 22:09:52.823333  196961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13163.pem && ln -fs /usr/share/ca-certificates/13163.pem /etc/ssl/certs/13163.pem"
	I0717 22:09:52.831303  196961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13163.pem
	I0717 22:09:52.834190  196961 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 21:44 /usr/share/ca-certificates/13163.pem
	I0717 22:09:52.834225  196961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13163.pem
	I0717 22:09:52.841073  196961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13163.pem /etc/ssl/certs/51391683.0"
	I0717 22:09:52.849288  196961 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 22:09:52.852100  196961 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 22:09:52.852147  196961 kubeadm.go:404] StartCluster: {Name:auto-871101 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:auto-871101 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.112.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:09:52.852239  196961 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0717 22:09:52.852281  196961 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:09:52.884183  196961 cri.go:89] found id: ""
	I0717 22:09:52.884238  196961 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 22:09:52.891946  196961 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 22:09:52.899481  196961 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 22:09:52.899533  196961 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 22:09:52.907208  196961 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:09:52.907247  196961 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 22:09:52.988922  196961 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1037-gcp\n", err: exit status 1
	I0717 22:09:53.052867  196961 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 22:09:56.667487  193237 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 22:09:56.667554  193237 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0717 22:10:01.650003  196961 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0717 22:10:01.650095  196961 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 22:10:01.650189  196961 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0717 22:10:01.650275  196961 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1037-gcp
	I0717 22:10:01.650343  196961 kubeadm.go:322] OS: Linux
	I0717 22:10:01.650410  196961 kubeadm.go:322] CGROUPS_CPU: enabled
	I0717 22:10:01.650472  196961 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0717 22:10:01.650542  196961 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0717 22:10:01.650605  196961 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0717 22:10:01.650679  196961 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0717 22:10:01.650752  196961 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0717 22:10:01.650850  196961 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0717 22:10:01.650942  196961 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0717 22:10:01.651010  196961 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0717 22:10:01.651121  196961 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 22:10:01.651252  196961 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 22:10:01.651375  196961 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 22:10:01.651461  196961 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 22:10:01.652940  196961 out.go:204]   - Generating certificates and keys ...
	I0717 22:10:01.653036  196961 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 22:10:01.653138  196961 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 22:10:01.653254  196961 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 22:10:01.653334  196961 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0717 22:10:01.653419  196961 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0717 22:10:01.653479  196961 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0717 22:10:01.653552  196961 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0717 22:10:01.653698  196961 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [auto-871101 localhost] and IPs [192.168.112.2 127.0.0.1 ::1]
	I0717 22:10:01.653764  196961 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0717 22:10:01.653919  196961 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [auto-871101 localhost] and IPs [192.168.112.2 127.0.0.1 ::1]
	I0717 22:10:01.653998  196961 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 22:10:01.654082  196961 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 22:10:01.654138  196961 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0717 22:10:01.654214  196961 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 22:10:01.654258  196961 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 22:10:01.654304  196961 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 22:10:01.654379  196961 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 22:10:01.654446  196961 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 22:10:01.654530  196961 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 22:10:01.654599  196961 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 22:10:01.654632  196961 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 22:10:01.654686  196961 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 22:10:01.656200  196961 out.go:204]   - Booting up control plane ...
	I0717 22:10:01.656304  196961 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 22:10:01.656392  196961 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 22:10:01.656481  196961 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 22:10:01.656602  196961 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 22:10:01.656780  196961 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 22:10:01.656897  196961 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.001919 seconds
	I0717 22:10:01.657064  196961 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 22:10:01.657241  196961 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 22:10:01.657335  196961 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 22:10:01.657556  196961 kubeadm.go:322] [mark-control-plane] Marking the node auto-871101 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 22:10:01.657643  196961 kubeadm.go:322] [bootstrap-token] Using token: nsw0s3.bmq8m4epsjoclxwa
	I0717 22:10:01.658955  196961 out.go:204]   - Configuring RBAC rules ...
	I0717 22:10:01.659065  196961 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 22:10:01.659138  196961 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 22:10:01.659311  196961 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 22:10:01.659424  196961 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 22:10:01.659524  196961 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 22:10:01.659621  196961 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 22:10:01.659761  196961 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 22:10:01.659842  196961 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 22:10:01.659884  196961 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 22:10:01.659893  196961 kubeadm.go:322] 
	I0717 22:10:01.659941  196961 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 22:10:01.659947  196961 kubeadm.go:322] 
	I0717 22:10:01.660017  196961 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 22:10:01.660028  196961 kubeadm.go:322] 
	I0717 22:10:01.660055  196961 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 22:10:01.660104  196961 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 22:10:01.660149  196961 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 22:10:01.660155  196961 kubeadm.go:322] 
	I0717 22:10:01.660202  196961 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0717 22:10:01.660214  196961 kubeadm.go:322] 
	I0717 22:10:01.660255  196961 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 22:10:01.660261  196961 kubeadm.go:322] 
	I0717 22:10:01.660302  196961 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 22:10:01.660395  196961 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 22:10:01.660459  196961 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 22:10:01.660465  196961 kubeadm.go:322] 
	I0717 22:10:01.660578  196961 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 22:10:01.660681  196961 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 22:10:01.660691  196961 kubeadm.go:322] 
	I0717 22:10:01.660807  196961 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token nsw0s3.bmq8m4epsjoclxwa \
	I0717 22:10:01.660963  196961 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:e689e19472dae50aa2cf886e41f52c0fd80f34a719435911b1bb4ef0d359ff11 \
	I0717 22:10:01.661009  196961 kubeadm.go:322] 	--control-plane 
	I0717 22:10:01.661018  196961 kubeadm.go:322] 
	I0717 22:10:01.661149  196961 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 22:10:01.661169  196961 kubeadm.go:322] 
	I0717 22:10:01.661238  196961 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token nsw0s3.bmq8m4epsjoclxwa \
	I0717 22:10:01.661346  196961 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:e689e19472dae50aa2cf886e41f52c0fd80f34a719435911b1bb4ef0d359ff11 
	I0717 22:10:01.661355  196961 cni.go:84] Creating CNI manager for ""
	I0717 22:10:01.661363  196961 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0717 22:10:01.662941  196961 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 22:10:01.664377  196961 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 22:10:01.668067  196961 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 22:10:01.668088  196961 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 22:10:01.684193  196961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 22:10:02.407490  196961 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 22:10:02.407596  196961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:10:02.407597  196961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.31.0 minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8 minikube.k8s.io/name=auto-871101 minikube.k8s.io/updated_at=2023_07_17T22_10_02_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:10:02.480467  196961 ops.go:34] apiserver oom_adj: -16
	I0717 22:10:02.480542  196961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:10:01.668629  193237 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 22:10:01.668666  193237 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0717 22:10:03.066196  196961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:10:03.565798  196961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:10:04.066094  196961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:10:04.565632  196961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:10:05.065911  196961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:10:05.566084  196961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:10:06.065331  196961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:10:06.566069  196961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:10:07.065286  196961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:10:07.565334  196961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:10:06.669160  193237 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 22:10:06.669205  193237 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0717 22:10:10.791911  190811 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 22:10:10.816516  190811 out.go:177] 
	W0717 22:10:10.817792  190811 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to start container runtime: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-17T22:10:10Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	
	W0717 22:10:10.817806  190811 out.go:239] * 
	W0717 22:10:10.818547  190811 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 22:10:10.820135  190811 out.go:177] 
	
	* 
	* ==> container status <==
	* 
	* ==> containerd <==
	* -- Logs begin at Mon 2023-07-17 22:09:34 UTC, end at Mon 2023-07-17 22:10:11 UTC. --
	Jul 17 22:09:36 missing-upgrade-863015 containerd[638]: time="2023-07-17T22:09:36.817614938Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
	Jul 17 22:09:36 missing-upgrade-863015 containerd[638]: time="2023-07-17T22:09:36.817627405Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 17 22:09:36 missing-upgrade-863015 containerd[638]: time="2023-07-17T22:09:36.817638901Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 17 22:09:36 missing-upgrade-863015 containerd[638]: time="2023-07-17T22:09:36.817649598Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 17 22:09:36 missing-upgrade-863015 containerd[638]: time="2023-07-17T22:09:36.817682487Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 17 22:09:36 missing-upgrade-863015 containerd[638]: time="2023-07-17T22:09:36.817730544Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 17 22:09:36 missing-upgrade-863015 containerd[638]: time="2023-07-17T22:09:36.818078708Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 17 22:09:36 missing-upgrade-863015 containerd[638]: time="2023-07-17T22:09:36.818109958Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 17 22:09:36 missing-upgrade-863015 containerd[638]: time="2023-07-17T22:09:36.818165251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 17 22:09:36 missing-upgrade-863015 containerd[638]: time="2023-07-17T22:09:36.818177836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 17 22:09:36 missing-upgrade-863015 containerd[638]: time="2023-07-17T22:09:36.818188544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 17 22:09:36 missing-upgrade-863015 containerd[638]: time="2023-07-17T22:09:36.818198594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 17 22:09:36 missing-upgrade-863015 containerd[638]: time="2023-07-17T22:09:36.818208459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 17 22:09:36 missing-upgrade-863015 containerd[638]: time="2023-07-17T22:09:36.818219681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 17 22:09:36 missing-upgrade-863015 containerd[638]: time="2023-07-17T22:09:36.818230244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 17 22:09:36 missing-upgrade-863015 containerd[638]: time="2023-07-17T22:09:36.818240461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 17 22:09:36 missing-upgrade-863015 containerd[638]: time="2023-07-17T22:09:36.818250486Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 17 22:09:36 missing-upgrade-863015 containerd[638]: time="2023-07-17T22:09:36.818283003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 17 22:09:36 missing-upgrade-863015 containerd[638]: time="2023-07-17T22:09:36.818295741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 17 22:09:36 missing-upgrade-863015 containerd[638]: time="2023-07-17T22:09:36.818307363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 17 22:09:36 missing-upgrade-863015 containerd[638]: time="2023-07-17T22:09:36.818316991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 17 22:09:36 missing-upgrade-863015 containerd[638]: time="2023-07-17T22:09:36.818514852Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Jul 17 22:09:36 missing-upgrade-863015 containerd[638]: time="2023-07-17T22:09:36.818548663Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Jul 17 22:09:36 missing-upgrade-863015 containerd[638]: time="2023-07-17T22:09:36.818588411Z" level=info msg="containerd successfully booted in 0.027887s"
	Jul 17 22:09:36 missing-upgrade-863015 systemd[1]: Started containerd container runtime.
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: 02 42 a3 4e 1e 0c 02 42 c0 a8 3a 02 08 00
	[  +4.059717] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-cbdb6a80ee68
	[  +0.000021] ll header: 00000000: 02 42 a3 4e 1e 0c 02 42 c0 a8 3a 02 08 00
	[  +8.191349] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-cbdb6a80ee68
	[  +0.000025] ll header: 00000000: 02 42 a3 4e 1e 0c 02 42 c0 a8 3a 02 08 00
	[Jul17 21:59] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-cbdb6a80ee68
	[  +0.000007] ll header: 00000000: 02 42 a3 4e 1e 0c 02 42 c0 a8 3a 02 08 00
	[  +1.035603] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-cbdb6a80ee68
	[  +0.000008] ll header: 00000000: 02 42 a3 4e 1e 0c 02 42 c0 a8 3a 02 08 00
	[  +2.011841] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-cbdb6a80ee68
	[  +0.000007] ll header: 00000000: 02 42 a3 4e 1e 0c 02 42 c0 a8 3a 02 08 00
	[  +4.031692] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-cbdb6a80ee68
	[  +0.000026] ll header: 00000000: 02 42 a3 4e 1e 0c 02 42 c0 a8 3a 02 08 00
	[  +8.191329] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-cbdb6a80ee68
	[  +0.000025] ll header: 00000000: 02 42 a3 4e 1e 0c 02 42 c0 a8 3a 02 08 00
	[Jul17 22:03] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-828843de2888
	[  +0.000007] ll header: 00000000: 02 42 15 74 00 dc 02 42 c0 a8 43 02 08 00
	[  +1.028450] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-828843de2888
	[  +0.000022] ll header: 00000000: 02 42 15 74 00 dc 02 42 c0 a8 43 02 08 00
	[  +2.015814] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-828843de2888
	[  +0.000022] ll header: 00000000: 02 42 15 74 00 dc 02 42 c0 a8 43 02 08 00
	[  +4.159675] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-828843de2888
	[  +0.000005] ll header: 00000000: 02 42 15 74 00 dc 02 42 c0 a8 43 02 08 00
	[  +8.191399] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-828843de2888
	[  +0.000006] ll header: 00000000: 02 42 15 74 00 dc 02 42 c0 a8 43 02 08 00
	
	* 
	* ==> kernel <==
	*  22:10:12 up 52 min,  0 users,  load average: 5.03, 3.73, 2.01
	Linux missing-upgrade-863015 5.15.0-1037-gcp #45~20.04.1-Ubuntu SMP Thu Jun 22 08:31:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2023-07-17 22:09:34 UTC, end at Mon 2023-07-17 22:10:12 UTC. --
	-- No entries --
	
	

-- /stdout --
** stderr ** 
	E0717 22:10:11.391156  200355 logs.go:281] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-17T22:10:11Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0717 22:10:11.413290  200355 logs.go:281] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-17T22:10:11Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0717 22:10:11.435135  200355 logs.go:281] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-17T22:10:11Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0717 22:10:11.460147  200355 logs.go:281] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-17T22:10:11Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0717 22:10:11.482391  200355 logs.go:281] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-17T22:10:11Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0717 22:10:11.503968  200355 logs.go:281] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-17T22:10:11Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0717 22:10:11.525406  200355 logs.go:281] Failed to list containers for "kindnet": crictl list: sudo crictl ps -a --quiet --name=kindnet: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-17T22:10:11Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0717 22:10:11.548314  200355 logs.go:281] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-17T22:10:11Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0717 22:10:11.636202  200355 logs.go:195] command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-17T22:10:11Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	 output: "\n** stderr ** \ntime=\"2023-07-17T22:10:11Z\" level=fatal msg=\"listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService\"\nCannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?\n\n** /stderr **"
	E0717 22:10:12.148406  200355 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: container status, describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p missing-upgrade-863015 -n missing-upgrade-863015
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p missing-upgrade-863015 -n missing-upgrade-863015: exit status 2 (275.98214ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "missing-upgrade-863015" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "missing-upgrade-863015" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-863015
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-863015: (2.373857856s)
--- FAIL: TestMissingContainerUpgrade (156.41s)


Test pass (278/304)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 22.26
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.05
10 TestDownloadOnly/v1.27.3/json-events 22.28
11 TestDownloadOnly/v1.27.3/preload-exists 0
15 TestDownloadOnly/v1.27.3/LogsDuration 0.05
16 TestDownloadOnly/DeleteAll 0.18
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.11
18 TestDownloadOnlyKic 1.13
19 TestBinaryMirror 0.67
20 TestOffline 59.3
22 TestAddons/Setup 124.22
24 TestAddons/parallel/Registry 23.73
25 TestAddons/parallel/Ingress 21.92
26 TestAddons/parallel/InspektorGadget 10.74
27 TestAddons/parallel/MetricsServer 6.17
28 TestAddons/parallel/HelmTiller 11.9
30 TestAddons/parallel/CSI 102.29
31 TestAddons/parallel/Headlamp 18.66
32 TestAddons/parallel/CloudSpanner 5.63
35 TestAddons/serial/GCPAuth/Namespaces 0.12
36 TestAddons/StoppedEnableDisable 12.09
37 TestCertOptions 34.41
38 TestCertExpiration 220.43
40 TestForceSystemdFlag 32.72
41 TestForceSystemdEnv 31.61
43 TestKVMDriverInstallOrUpdate 8.88
47 TestErrorSpam/setup 23.66
48 TestErrorSpam/start 0.55
49 TestErrorSpam/status 0.81
50 TestErrorSpam/pause 1.42
51 TestErrorSpam/unpause 1.41
52 TestErrorSpam/stop 1.34
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 48.64
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 11.06
59 TestFunctional/serial/KubeContext 0.04
60 TestFunctional/serial/KubectlGetPods 0.07
63 TestFunctional/serial/CacheCmd/cache/add_remote 2.88
64 TestFunctional/serial/CacheCmd/cache/add_local 2.83
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.04
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
68 TestFunctional/serial/CacheCmd/cache/cache_reload 1.72
69 TestFunctional/serial/CacheCmd/cache/delete 0.08
70 TestFunctional/serial/MinikubeKubectlCmd 0.1
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
72 TestFunctional/serial/ExtraConfig 53.36
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 1.25
75 TestFunctional/serial/LogsFileCmd 1.27
76 TestFunctional/serial/InvalidService 4.37
78 TestFunctional/parallel/ConfigCmd 0.32
79 TestFunctional/parallel/DashboardCmd 10.62
80 TestFunctional/parallel/DryRun 0.31
81 TestFunctional/parallel/InternationalLanguage 0.14
82 TestFunctional/parallel/StatusCmd 1.05
86 TestFunctional/parallel/ServiceCmdConnect 7.58
87 TestFunctional/parallel/AddonsCmd 0.12
88 TestFunctional/parallel/PersistentVolumeClaim 38.87
90 TestFunctional/parallel/SSHCmd 0.58
91 TestFunctional/parallel/CpCmd 1.21
92 TestFunctional/parallel/MySQL 30.83
93 TestFunctional/parallel/FileSync 0.25
94 TestFunctional/parallel/CertSync 1.48
98 TestFunctional/parallel/NodeLabels 0.09
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.54
103 TestFunctional/parallel/ServiceCmd/DeployApp 8.22
105 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.44
106 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
108 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.35
109 TestFunctional/parallel/Version/short 0.04
110 TestFunctional/parallel/Version/components 0.44
111 TestFunctional/parallel/ImageCommands/ImageListShort 0.2
112 TestFunctional/parallel/ImageCommands/ImageListTable 0.2
113 TestFunctional/parallel/ImageCommands/ImageListJson 0.2
114 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
115 TestFunctional/parallel/ImageCommands/ImageBuild 4.87
116 TestFunctional/parallel/ImageCommands/Setup 2.62
117 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.76
118 TestFunctional/parallel/ServiceCmd/List 0.31
119 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.96
120 TestFunctional/parallel/ServiceCmd/JSONOutput 0.38
121 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
122 TestFunctional/parallel/ServiceCmd/Format 0.32
123 TestFunctional/parallel/ServiceCmd/URL 0.35
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
125 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
129 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
130 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.82
131 TestFunctional/parallel/ProfileCmd/profile_not_create 0.31
132 TestFunctional/parallel/ProfileCmd/profile_list 0.29
133 TestFunctional/parallel/ProfileCmd/profile_json_output 0.29
134 TestFunctional/parallel/MountCmd/any-port 10.06
135 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
136 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
137 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.72
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.43
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.05
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.78
142 TestFunctional/parallel/MountCmd/specific-port 1.98
143 TestFunctional/parallel/MountCmd/VerifyCleanup 1.73
144 TestFunctional/delete_addon-resizer_images 0.06
145 TestFunctional/delete_my-image_image 0.01
146 TestFunctional/delete_minikube_cached_images 0.01
150 TestIngressAddonLegacy/StartLegacyK8sCluster 83.49
152 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 12.77
153 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.5
154 TestIngressAddonLegacy/serial/ValidateIngressAddons 38.56
157 TestJSONOutput/start/Command 50.18
158 TestJSONOutput/start/Audit 0
160 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/pause/Command 0.62
164 TestJSONOutput/pause/Audit 0
166 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/unpause/Command 0.55
170 TestJSONOutput/unpause/Audit 0
172 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/stop/Command 5.66
176 TestJSONOutput/stop/Audit 0
178 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
180 TestErrorJSONOutput 0.18
182 TestKicCustomNetwork/create_custom_network 41.91
183 TestKicCustomNetwork/use_default_bridge_network 26.66
184 TestKicExistingNetwork 26.74
185 TestKicCustomSubnet 24.19
186 TestKicStaticIP 23.58
187 TestMainNoArgs 0.04
188 TestMinikubeProfile 48.22
191 TestMountStart/serial/StartWithMountFirst 5.19
192 TestMountStart/serial/VerifyMountFirst 0.22
193 TestMountStart/serial/StartWithMountSecond 5.07
194 TestMountStart/serial/VerifyMountSecond 0.22
195 TestMountStart/serial/DeleteFirst 1.58
196 TestMountStart/serial/VerifyMountPostDelete 0.23
197 TestMountStart/serial/Stop 1.18
198 TestMountStart/serial/RestartStopped 7.03
199 TestMountStart/serial/VerifyMountPostStop 0.23
202 TestMultiNode/serial/FreshStart2Nodes 62.89
203 TestMultiNode/serial/DeployApp2Nodes 5.89
204 TestMultiNode/serial/PingHostFrom2Pods 0.77
205 TestMultiNode/serial/AddNode 18.36
206 TestMultiNode/serial/ProfileList 0.26
207 TestMultiNode/serial/CopyFile 8.48
208 TestMultiNode/serial/StopNode 2.03
209 TestMultiNode/serial/StartAfterStop 10.41
210 TestMultiNode/serial/RestartKeepsNodes 120.5
211 TestMultiNode/serial/DeleteNode 4.55
212 TestMultiNode/serial/StopMultiNode 23.79
213 TestMultiNode/serial/RestartMultiNode 84.66
214 TestMultiNode/serial/ValidateNameConflict 25.07
219 TestPreload 174.36
221 TestScheduledStopUnix 97.84
224 TestInsufficientStorage 9.47
225 TestRunningBinaryUpgrade 159.51
227 TestKubernetesUpgrade 373.03
229 TestStoppedBinaryUpgrade/Setup 3.07
233 TestStoppedBinaryUpgrade/Upgrade 167.47
238 TestNetworkPlugins/group/false 6.92
250 TestPause/serial/Start 56.76
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
253 TestNoKubernetes/serial/StartWithK8s 28.63
254 TestNoKubernetes/serial/StartWithStopK8s 26.66
255 TestPause/serial/SecondStartNoReconfiguration 12.32
256 TestPause/serial/Pause 0.64
257 TestPause/serial/VerifyStatus 0.31
258 TestPause/serial/Unpause 0.62
259 TestPause/serial/PauseAgain 0.68
260 TestPause/serial/DeletePaused 2.77
261 TestPause/serial/VerifyDeletedResources 0.74
262 TestNoKubernetes/serial/Start 6.79
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
264 TestNoKubernetes/serial/ProfileList 0.89
265 TestNoKubernetes/serial/Stop 1.2
266 TestNoKubernetes/serial/StartNoArgs 5.96
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.25
268 TestStoppedBinaryUpgrade/MinikubeLogs 1.68
269 TestNetworkPlugins/group/auto/Start 49.72
270 TestNetworkPlugins/group/kindnet/Start 48.78
271 TestNetworkPlugins/group/auto/KubeletFlags 0.29
272 TestNetworkPlugins/group/auto/NetCatPod 9.33
273 TestNetworkPlugins/group/auto/DNS 0.14
274 TestNetworkPlugins/group/auto/Localhost 0.12
275 TestNetworkPlugins/group/auto/HairPin 0.12
276 TestNetworkPlugins/group/calico/Start 67.87
277 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
278 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
279 TestNetworkPlugins/group/kindnet/NetCatPod 9.3
280 TestNetworkPlugins/group/kindnet/DNS 0.16
281 TestNetworkPlugins/group/kindnet/Localhost 0.15
282 TestNetworkPlugins/group/kindnet/HairPin 0.15
283 TestNetworkPlugins/group/custom-flannel/Start 47.61
284 TestNetworkPlugins/group/enable-default-cni/Start 74.37
285 TestNetworkPlugins/group/calico/ControllerPod 5.02
286 TestNetworkPlugins/group/calico/KubeletFlags 0.35
287 TestNetworkPlugins/group/calico/NetCatPod 8.48
288 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
289 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.34
290 TestNetworkPlugins/group/calico/DNS 0.15
291 TestNetworkPlugins/group/calico/Localhost 0.14
292 TestNetworkPlugins/group/calico/HairPin 0.13
293 TestNetworkPlugins/group/custom-flannel/DNS 0.15
294 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
295 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
296 TestNetworkPlugins/group/flannel/Start 53.17
297 TestNetworkPlugins/group/bridge/Start 37.48
298 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.27
299 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.96
300 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
301 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
302 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
304 TestStartStop/group/old-k8s-version/serial/FirstStart 132.96
305 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
306 TestNetworkPlugins/group/bridge/NetCatPod 8.5
307 TestNetworkPlugins/group/flannel/ControllerPod 5.02
308 TestNetworkPlugins/group/bridge/DNS 0.18
309 TestNetworkPlugins/group/bridge/Localhost 0.16
310 TestNetworkPlugins/group/bridge/HairPin 0.15
311 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
312 TestNetworkPlugins/group/flannel/NetCatPod 9.34
313 TestNetworkPlugins/group/flannel/DNS 0.19
314 TestNetworkPlugins/group/flannel/Localhost 0.15
315 TestNetworkPlugins/group/flannel/HairPin 0.14
317 TestStartStop/group/no-preload/serial/FirstStart 71.8
319 TestStartStop/group/embed-certs/serial/FirstStart 79.66
321 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 50.72
322 TestStartStop/group/no-preload/serial/DeployApp 10.47
323 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.99
324 TestStartStop/group/no-preload/serial/Stop 11.83
325 TestStartStop/group/embed-certs/serial/DeployApp 11.33
326 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.16
327 TestStartStop/group/no-preload/serial/SecondStart 317.73
328 TestStartStop/group/old-k8s-version/serial/DeployApp 10.32
329 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.37
330 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.9
331 TestStartStop/group/embed-certs/serial/Stop 11.88
332 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.73
333 TestStartStop/group/old-k8s-version/serial/Stop 11.99
334 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.1
335 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.85
336 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.15
337 TestStartStop/group/embed-certs/serial/SecondStart 333.05
338 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
339 TestStartStop/group/old-k8s-version/serial/SecondStart 668.73
340 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
341 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 593.95
342 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12.02
343 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
344 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
345 TestStartStop/group/no-preload/serial/Pause 2.65
347 TestStartStop/group/newest-cni/serial/FirstStart 33.57
348 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 14.02
349 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
350 TestStartStop/group/newest-cni/serial/DeployApp 0
351 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.23
352 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.37
353 TestStartStop/group/embed-certs/serial/Pause 3.28
354 TestStartStop/group/newest-cni/serial/Stop 1.22
355 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
356 TestStartStop/group/newest-cni/serial/SecondStart 40.35
357 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
358 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
359 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.31
360 TestStartStop/group/newest-cni/serial/Pause 2.75
361 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.02
362 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
363 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
364 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.45
365 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
366 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
367 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
368 TestStartStop/group/old-k8s-version/serial/Pause 2.37
TestDownloadOnly/v1.16.0/json-events (22.26s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-997965 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-997965 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (22.261036628s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (22.26s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-997965
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-997965: exit status 85 (51.722018ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-997965 | jenkins | v1.31.0 | 17 Jul 23 21:38 UTC |          |
	|         | -p download-only-997965        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 21:38:19
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 21:38:19.690016   13175 out.go:296] Setting OutFile to fd 1 ...
	I0717 21:38:19.690131   13175 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:38:19.690136   13175 out.go:309] Setting ErrFile to fd 2...
	I0717 21:38:19.690140   13175 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:38:19.690366   13175 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-6342/.minikube/bin
	W0717 21:38:19.690485   13175 root.go:314] Error reading config file at /home/jenkins/minikube-integration/16899-6342/.minikube/config/config.json: open /home/jenkins/minikube-integration/16899-6342/.minikube/config/config.json: no such file or directory
	I0717 21:38:19.691004   13175 out.go:303] Setting JSON to true
	I0717 21:38:19.691824   13175 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1246,"bootTime":1689628654,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 21:38:19.691886   13175 start.go:138] virtualization: kvm guest
	I0717 21:38:19.694385   13175 out.go:97] [download-only-997965] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 21:38:19.695971   13175 out.go:169] MINIKUBE_LOCATION=16899
	W0717 21:38:19.694476   13175 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/16899-6342/.minikube/cache/preloaded-tarball: no such file or directory
	I0717 21:38:19.694522   13175 notify.go:220] Checking for updates...
	I0717 21:38:19.698736   13175 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 21:38:19.700026   13175 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16899-6342/kubeconfig
	I0717 21:38:19.701423   13175 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-6342/.minikube
	I0717 21:38:19.703583   13175 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0717 21:38:19.705934   13175 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 21:38:19.706137   13175 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 21:38:19.726422   13175 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 21:38:19.726515   13175 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 21:38:20.061626   13175 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:43 SystemTime:2023-07-17 21:38:20.053474732 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 21:38:20.061772   13175 docker.go:294] overlay module found
	I0717 21:38:20.063678   13175 out.go:97] Using the docker driver based on user configuration
	I0717 21:38:20.063699   13175 start.go:298] selected driver: docker
	I0717 21:38:20.063704   13175 start.go:880] validating driver "docker" against <nil>
	I0717 21:38:20.063800   13175 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 21:38:20.111744   13175 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:43 SystemTime:2023-07-17 21:38:20.104226162 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 21:38:20.111946   13175 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 21:38:20.112415   13175 start_flags.go:382] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0717 21:38:20.112566   13175 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 21:38:20.114423   13175 out.go:169] Using Docker driver with root privileges
	I0717 21:38:20.115897   13175 cni.go:84] Creating CNI manager for ""
	I0717 21:38:20.115917   13175 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0717 21:38:20.115923   13175 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 21:38:20.115936   13175 start_flags.go:319] config:
	{Name:download-only-997965 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-997965 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:38:20.117462   13175 out.go:97] Starting control plane node download-only-997965 in cluster download-only-997965
	I0717 21:38:20.117482   13175 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0717 21:38:20.118668   13175 out.go:97] Pulling base image ...
	I0717 21:38:20.118689   13175 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0717 21:38:20.118836   13175 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 21:38:20.132940   13175 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0717 21:38:20.133083   13175 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0717 21:38:20.133168   13175 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0717 21:38:20.271103   13175 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0717 21:38:20.271127   13175 cache.go:57] Caching tarball of preloaded images
	I0717 21:38:20.271274   13175 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0717 21:38:20.273361   13175 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0717 21:38:20.273380   13175 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0717 21:38:20.437610   13175 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/16899-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0717 21:38:37.789247   13175 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0717 21:38:37.789317   13175 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16899-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0717 21:38:38.103663   13175 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0717 21:38:38.668307   13175 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0717 21:38:38.668642   13175 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/download-only-997965/config.json ...
	I0717 21:38:38.668672   13175 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/download-only-997965/config.json: {Name:mke447644eed51c59c1b07c7ea07616c3e293cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:38:38.668872   13175 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0717 21:38:38.669073   13175 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/16899-6342/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-997965"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.05s)

TestDownloadOnly/v1.27.3/json-events (22.28s)

=== RUN   TestDownloadOnly/v1.27.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-997965 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-997965 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (22.2839924s)
--- PASS: TestDownloadOnly/v1.27.3/json-events (22.28s)

TestDownloadOnly/v1.27.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.27.3/preload-exists
--- PASS: TestDownloadOnly/v1.27.3/preload-exists (0.00s)

TestDownloadOnly/v1.27.3/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.27.3/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-997965
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-997965: exit status 85 (53.246769ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-997965 | jenkins | v1.31.0 | 17 Jul 23 21:38 UTC |          |
	|         | -p download-only-997965        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-997965 | jenkins | v1.31.0 | 17 Jul 23 21:38 UTC |          |
	|         | -p download-only-997965        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 21:38:42
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 21:38:42.007663   13363 out.go:296] Setting OutFile to fd 1 ...
	I0717 21:38:42.007779   13363 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:38:42.007808   13363 out.go:309] Setting ErrFile to fd 2...
	I0717 21:38:42.007815   13363 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:38:42.008012   13363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-6342/.minikube/bin
	W0717 21:38:42.008113   13363 root.go:314] Error reading config file at /home/jenkins/minikube-integration/16899-6342/.minikube/config/config.json: open /home/jenkins/minikube-integration/16899-6342/.minikube/config/config.json: no such file or directory
	I0717 21:38:42.008494   13363 out.go:303] Setting JSON to true
	I0717 21:38:42.009269   13363 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1268,"bootTime":1689628654,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 21:38:42.009323   13363 start.go:138] virtualization: kvm guest
	I0717 21:38:42.011373   13363 out.go:97] [download-only-997965] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 21:38:42.012832   13363 out.go:169] MINIKUBE_LOCATION=16899
	I0717 21:38:42.011495   13363 notify.go:220] Checking for updates...
	I0717 21:38:42.015464   13363 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 21:38:42.016800   13363 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16899-6342/kubeconfig
	I0717 21:38:42.018209   13363 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-6342/.minikube
	I0717 21:38:42.019466   13363 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0717 21:38:42.021694   13363 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 21:38:42.022089   13363 config.go:182] Loaded profile config "download-only-997965": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W0717 21:38:42.022151   13363 start.go:788] api.Load failed for download-only-997965: filestore "download-only-997965": Docker machine "download-only-997965" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0717 21:38:42.022258   13363 driver.go:373] Setting default libvirt URI to qemu:///system
	W0717 21:38:42.022290   13363 start.go:788] api.Load failed for download-only-997965: filestore "download-only-997965": Docker machine "download-only-997965" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0717 21:38:42.042557   13363 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 21:38:42.042632   13363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 21:38:42.092181   13363 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-07-17 21:38:42.084005874 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 21:38:42.092264   13363 docker.go:294] overlay module found
	I0717 21:38:42.093862   13363 out.go:97] Using the docker driver based on existing profile
	I0717 21:38:42.093880   13363 start.go:298] selected driver: docker
	I0717 21:38:42.093884   13363 start.go:880] validating driver "docker" against &{Name:download-only-997965 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-997965 Namespace:default APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetP
ath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:38:42.094003   13363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 21:38:42.144440   13363 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-07-17 21:38:42.136792948 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 21:38:42.145211   13363 cni.go:84] Creating CNI manager for ""
	I0717 21:38:42.145237   13363 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0717 21:38:42.145253   13363 start_flags.go:319] config:
	{Name:download-only-997965 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:download-only-997965 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISock
et: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:38:42.146849   13363 out.go:97] Starting control plane node download-only-997965 in cluster download-only-997965
	I0717 21:38:42.146865   13363 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0717 21:38:42.148029   13363 out.go:97] Pulling base image ...
	I0717 21:38:42.148054   13363 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0717 21:38:42.148172   13363 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 21:38:42.162416   13363 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0717 21:38:42.162541   13363 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0717 21:38:42.162558   13363 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory, skipping pull
	I0717 21:38:42.162564   13363 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in cache, skipping pull
	I0717 21:38:42.162571   13363 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0717 21:38:42.714510   13363 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4
	I0717 21:38:42.714535   13363 cache.go:57] Caching tarball of preloaded images
	I0717 21:38:42.714656   13363 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0717 21:38:42.716422   13363 out.go:97] Downloading Kubernetes v1.27.3 preload ...
	I0717 21:38:42.716439   13363 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4 ...
	I0717 21:38:42.870765   13363 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4?checksum=md5:1f83873e0026e1a370942079b65e1960 -> /home/jenkins/minikube-integration/16899-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4
	I0717 21:39:00.070253   13363 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4 ...
	I0717 21:39:00.070341   13363 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16899-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4 ...
	I0717 21:39:00.914257   13363 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on containerd
	I0717 21:39:00.914380   13363 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/download-only-997965/config.json ...
	I0717 21:39:00.914560   13363 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0717 21:39:00.914725   13363 download.go:107] Downloading: https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/16899-6342/.minikube/cache/linux/amd64/v1.27.3/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-997965"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.3/LogsDuration (0.05s)

TestDownloadOnly/DeleteAll (0.18s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.18s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-997965
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.11s)

TestDownloadOnlyKic (1.13s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-997614 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-997614" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-997614
--- PASS: TestDownloadOnlyKic (1.13s)

TestBinaryMirror (0.67s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-381755 --alsologtostderr --binary-mirror http://127.0.0.1:39781 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-381755" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-381755
--- PASS: TestBinaryMirror (0.67s)

TestOffline (59.3s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-923943 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-923943 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd: (56.066598914s)
helpers_test.go:175: Cleaning up "offline-containerd-923943" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-923943
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-923943: (3.237134557s)
--- PASS: TestOffline (59.30s)

TestAddons/Setup (124.22s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-767732 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-767732 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m4.217553411s)
--- PASS: TestAddons/Setup (124.22s)

TestAddons/parallel/Registry (23.73s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 15.688845ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-nxx89" [dd754cd0-a8b3-4fda-a55d-363b4313c917] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.007773424s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-5rm9q" [0f0423ae-d19b-45ee-81fb-c07ab3e6a269] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008062975s
addons_test.go:316: (dbg) Run:  kubectl --context addons-767732 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-767732 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-767732 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (12.95469301s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-amd64 -p addons-767732 ip
addons_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p addons-767732 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (23.73s)

TestAddons/parallel/Ingress (21.92s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-767732 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-767732 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-767732 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [5e48c278-9f88-45e8-88dc-691979b3b4a1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [5e48c278-9f88-45e8-88dc-691979b3b4a1] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.252993979s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p addons-767732 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context addons-767732 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p addons-767732 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p addons-767732 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p addons-767732 addons disable ingress-dns --alsologtostderr -v=1: (1.650987047s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p addons-767732 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p addons-767732 addons disable ingress --alsologtostderr -v=1: (7.60599569s)
--- PASS: TestAddons/parallel/Ingress (21.92s)

TestAddons/parallel/InspektorGadget (10.74s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-md25z" [a2fc75ef-4d8e-4572-8abf-82f5a4e38576] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.017559456s
addons_test.go:817: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-767732
addons_test.go:817: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-767732: (5.71733952s)
--- PASS: TestAddons/parallel/InspektorGadget (10.74s)

TestAddons/parallel/MetricsServer (6.17s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 14.041053ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-844d8db974-kvcfl" [34b7b80d-c7ac-4207-a350-9e7edf1f2403] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.008427335s
addons_test.go:391: (dbg) Run:  kubectl --context addons-767732 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p addons-767732 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p addons-767732 addons disable metrics-server --alsologtostderr -v=1: (1.077542963s)
--- PASS: TestAddons/parallel/MetricsServer (6.17s)

TestAddons/parallel/HelmTiller (11.9s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 3.387986ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6847666dc-cd2kf" [22851b1a-61e8-459f-b2ff-03ac31652630] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.007108751s
addons_test.go:449: (dbg) Run:  kubectl --context addons-767732 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-767732 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.327940966s)
addons_test.go:454: kubectl --context addons-767732 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p addons-767732 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.90s)

TestAddons/parallel/CSI (102.29s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 5.722524ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-767732 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
2023/07/17 21:41:33 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-767732 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a9c8db1b-37aa-4087-a604-7e49860ebb2e] Pending
helpers_test.go:344: "task-pv-pod" [a9c8db1b-37aa-4087-a604-7e49860ebb2e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a9c8db1b-37aa-4087-a604-7e49860ebb2e] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.005699083s
addons_test.go:560: (dbg) Run:  kubectl --context addons-767732 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-767732 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-767732 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-767732 delete pod task-pv-pod
addons_test.go:570: (dbg) Done: kubectl --context addons-767732 delete pod task-pv-pod: (1.457272594s)
addons_test.go:576: (dbg) Run:  kubectl --context addons-767732 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-767732 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767732 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-767732 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [2b68d813-fbc3-4a56-82a7-efc2e34b44b3] Pending
helpers_test.go:344: "task-pv-pod-restore" [2b68d813-fbc3-4a56-82a7-efc2e34b44b3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [2b68d813-fbc3-4a56-82a7-efc2e34b44b3] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.005862763s
addons_test.go:602: (dbg) Run:  kubectl --context addons-767732 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-767732 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-767732 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-amd64 -p addons-767732 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-amd64 -p addons-767732 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.51194242s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-amd64 -p addons-767732 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (102.29s)

TestAddons/parallel/Headlamp (18.66s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-767732 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-767732 --alsologtostderr -v=1: (1.650142909s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-66f6498c69-7hw8r" [eefa932e-f884-4f10-83fc-be0d0d78ce0b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-66f6498c69-7hw8r" [eefa932e-f884-4f10-83fc-be0d0d78ce0b] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 17.007616043s
--- PASS: TestAddons/parallel/Headlamp (18.66s)

TestAddons/parallel/CloudSpanner (5.63s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-88647b4cb-97tjt" [67d98441-b168-4f65-8381-110668394f49] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006633688s
addons_test.go:836: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-767732
--- PASS: TestAddons/parallel/CloudSpanner (5.63s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-767732 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-767732 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/StoppedEnableDisable (12.09s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-767732
addons_test.go:148: (dbg) Done: out/minikube-linux-amd64 stop -p addons-767732: (11.882837539s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-767732
addons_test.go:156: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-767732
addons_test.go:161: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-767732
--- PASS: TestAddons/StoppedEnableDisable (12.09s)

TestCertOptions (34.41s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-030669 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-030669 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (32.06605429s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-030669 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-030669 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-030669 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-030669" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-030669
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-030669: (1.813944272s)
--- PASS: TestCertOptions (34.41s)

TestCertExpiration (220.43s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-413031 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-413031 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (25.757922046s)
E0717 22:08:19.581288   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/functional-773235/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-413031 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-413031 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (12.361901335s)
helpers_test.go:175: Cleaning up "cert-expiration-413031" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-413031
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-413031: (2.309996979s)
--- PASS: TestCertExpiration (220.43s)

TestForceSystemdFlag (32.72s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-981449 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-981449 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (28.872481647s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-981449 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-981449" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-981449
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-981449: (3.59563298s)
--- PASS: TestForceSystemdFlag (32.72s)

TestForceSystemdEnv (31.61s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-324457 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-324457 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (26.402394427s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-324457 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-324457" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-324457
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-324457: (4.915288901s)
--- PASS: TestForceSystemdEnv (31.61s)

TestKVMDriverInstallOrUpdate (8.88s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (8.88s)

TestErrorSpam/setup (23.66s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-312428 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-312428 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-312428 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-312428 --driver=docker  --container-runtime=containerd: (23.656016113s)
--- PASS: TestErrorSpam/setup (23.66s)

TestErrorSpam/start (0.55s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-312428 --log_dir /tmp/nospam-312428 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-312428 --log_dir /tmp/nospam-312428 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-312428 --log_dir /tmp/nospam-312428 start --dry-run
--- PASS: TestErrorSpam/start (0.55s)

TestErrorSpam/status (0.81s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-312428 --log_dir /tmp/nospam-312428 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-312428 --log_dir /tmp/nospam-312428 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-312428 --log_dir /tmp/nospam-312428 status
--- PASS: TestErrorSpam/status (0.81s)

TestErrorSpam/pause (1.42s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-312428 --log_dir /tmp/nospam-312428 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-312428 --log_dir /tmp/nospam-312428 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-312428 --log_dir /tmp/nospam-312428 pause
--- PASS: TestErrorSpam/pause (1.42s)

TestErrorSpam/unpause (1.41s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-312428 --log_dir /tmp/nospam-312428 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-312428 --log_dir /tmp/nospam-312428 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-312428 --log_dir /tmp/nospam-312428 unpause
--- PASS: TestErrorSpam/unpause (1.41s)

TestErrorSpam/stop (1.34s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-312428 --log_dir /tmp/nospam-312428 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-312428 --log_dir /tmp/nospam-312428 stop: (1.176216687s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-312428 --log_dir /tmp/nospam-312428 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-312428 --log_dir /tmp/nospam-312428 stop
--- PASS: TestErrorSpam/stop (1.34s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/16899-6342/.minikube/files/etc/test/nested/copy/13163/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (48.64s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-773235 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-773235 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (48.63642147s)
--- PASS: TestFunctional/serial/StartWithProxy (48.64s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (11.06s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-773235 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-773235 --alsologtostderr -v=8: (11.055016374s)
functional_test.go:659: soft start took 11.055675982s for "functional-773235" cluster.
--- PASS: TestFunctional/serial/SoftStart (11.06s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-773235 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.88s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-773235 cache add registry.k8s.io/pause:3.3: (1.036934584s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.88s)

TestFunctional/serial/CacheCmd/cache/add_local (2.83s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-773235 /tmp/TestFunctionalserialCacheCmdcacheadd_local900429689/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 cache add minikube-local-cache-test:functional-773235
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-773235 cache add minikube-local-cache-test:functional-773235: (2.545130068s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 cache delete minikube-local-cache-test:functional-773235
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-773235
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.83s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.72s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-773235 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (249.46343ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.72s)

TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 kubectl -- --context functional-773235 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-773235 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (53.36s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-773235 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0717 21:46:10.734931   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/addons-767732/client.crt: no such file or directory
E0717 21:46:10.740568   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/addons-767732/client.crt: no such file or directory
E0717 21:46:10.750796   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/addons-767732/client.crt: no such file or directory
E0717 21:46:10.771015   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/addons-767732/client.crt: no such file or directory
E0717 21:46:10.811230   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/addons-767732/client.crt: no such file or directory
E0717 21:46:10.891499   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/addons-767732/client.crt: no such file or directory
E0717 21:46:11.051851   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/addons-767732/client.crt: no such file or directory
E0717 21:46:11.372404   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/addons-767732/client.crt: no such file or directory
E0717 21:46:12.013406   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/addons-767732/client.crt: no such file or directory
E0717 21:46:13.294072   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/addons-767732/client.crt: no such file or directory
E0717 21:46:15.855278   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/addons-767732/client.crt: no such file or directory
E0717 21:46:20.976347   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/addons-767732/client.crt: no such file or directory
E0717 21:46:31.217494   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/addons-767732/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-773235 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (53.356239624s)
functional_test.go:757: restart took 53.356353475s for "functional-773235" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (53.36s)
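Note on the repeated cert_rotation errors above: the timestamps show each retry arriving at roughly double the previous interval (about 5ms out to 10s), consistent with an exponential backoff on the missing client.crt. A small sketch that verifies the spacing, using the timestamps copied from the log:

```python
from datetime import datetime

# Timestamps copied verbatim from the E0717 cert_rotation lines above.
stamps = [
    "21:46:10.734931", "21:46:10.740568", "21:46:10.750796", "21:46:10.771015",
    "21:46:10.811230", "21:46:10.891499", "21:46:11.051851", "21:46:11.372404",
    "21:46:12.013406", "21:46:13.294072", "21:46:15.855278", "21:46:20.976347",
    "21:46:31.217494",
]

times = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
deltas = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
ratios = [b / a for a, b in zip(deltas, deltas[1:])]

# Each retry interval is roughly double the previous one.
print([round(d, 3) for d in deltas])
print(all(1.5 < r < 2.5 for r in ratios))
```

The errors stop once the restart completes, so they are noise from the deleted addons-767732 profile rather than a failure of this test.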

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-773235 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.25s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-773235 logs: (1.253527824s)
--- PASS: TestFunctional/serial/LogsCmd (1.25s)

TestFunctional/serial/LogsFileCmd (1.27s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 logs --file /tmp/TestFunctionalserialLogsFileCmd2542700727/001/logs.txt
E0717 21:46:51.697958   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/addons-767732/client.crt: no such file or directory
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-773235 logs --file /tmp/TestFunctionalserialLogsFileCmd2542700727/001/logs.txt: (1.274056925s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.27s)

TestFunctional/serial/InvalidService (4.37s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-773235 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-773235
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-773235: exit status 115 (300.296743ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31805 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-773235 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.37s)

TestFunctional/parallel/ConfigCmd (0.32s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-773235 config get cpus: exit status 14 (71.926925ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-773235 config get cpus: exit status 14 (43.735106ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.32s)
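The set/unset/get round-trip exercised above can be sketched as a toy model. The exit status 14 and the error text are taken from the log; the in-memory dict below merely stands in for minikube's per-profile config storage and is not its real implementation:

```python
# Toy model of `minikube config set/unset/get`; a plain dict stands in for
# minikube's persisted per-profile configuration.
config = {}

def config_set(key, value):
    config[key] = value
    return 0

def config_unset(key):
    config.pop(key, None)
    return 0

def config_get(key):
    if key not in config:
        # Error text and exit status 14 as observed in the log above.
        print("Error: specified key could not be found in config")
        return 14
    print(config[key])
    return 0

# Mirrors the test sequence: get after unset fails, get after set succeeds.
assert config_get("cpus") == 14
config_set("cpus", 2)
assert config_get("cpus") == 0
config_unset("cpus")
assert config_get("cpus") == 14
```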

TestFunctional/parallel/DashboardCmd (10.62s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-773235 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-773235 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 52637: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.62s)

TestFunctional/parallel/DryRun (0.31s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-773235 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-773235 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (133.347574ms)
-- stdout --
	* [functional-773235] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-6342/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-6342/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0717 21:47:09.828763   50423 out.go:296] Setting OutFile to fd 1 ...
	I0717 21:47:09.828870   50423 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:47:09.828878   50423 out.go:309] Setting ErrFile to fd 2...
	I0717 21:47:09.828883   50423 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:47:09.829077   50423 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-6342/.minikube/bin
	I0717 21:47:09.829551   50423 out.go:303] Setting JSON to false
	I0717 21:47:09.830667   50423 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1776,"bootTime":1689628654,"procs":499,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 21:47:09.830723   50423 start.go:138] virtualization: kvm guest
	I0717 21:47:09.833216   50423 out.go:177] * [functional-773235] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 21:47:09.834614   50423 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 21:47:09.834613   50423 notify.go:220] Checking for updates...
	I0717 21:47:09.835972   50423 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 21:47:09.837266   50423 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-6342/kubeconfig
	I0717 21:47:09.838560   50423 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-6342/.minikube
	I0717 21:47:09.839765   50423 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 21:47:09.841005   50423 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 21:47:09.843385   50423 config.go:182] Loaded profile config "functional-773235": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0717 21:47:09.844227   50423 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 21:47:09.864301   50423 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 21:47:09.864396   50423 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 21:47:09.917133   50423 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2023-07-17 21:47:09.908842667 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 21:47:09.917281   50423 docker.go:294] overlay module found
	I0717 21:47:09.919216   50423 out.go:177] * Using the docker driver based on existing profile
	I0717 21:47:09.920501   50423 start.go:298] selected driver: docker
	I0717 21:47:09.920525   50423 start.go:880] validating driver "docker" against &{Name:functional-773235 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-773235 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:doc
ker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:47:09.920635   50423 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 21:47:09.922560   50423 out.go:177] 
	W0717 21:47:09.923845   50423 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0717 21:47:09.925039   50423 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-773235 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.31s)
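The expected dry-run failure above comes from minikube's requested-memory validation. A hedged sketch of that check, where the 1800MB floor, the message text, and exit code 23 are all taken from the log rather than from minikube's source:

```python
MIN_USABLE_MB = 1800  # floor quoted in the RSRC_INSUFFICIENT_REQ_MEMORY message

def validate_requested_memory(requested_mb):
    """Return an exit status mimicking the dry-run memory check seen above."""
    if requested_mb < MIN_USABLE_MB:
        print(f"X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory "
              f"allocation {requested_mb}MiB is less than the usable minimum of "
              f"{MIN_USABLE_MB}MB")
        return 23  # exit status observed for --memory 250MB
    return 0

print(validate_requested_memory(250))   # the deliberately failing dry run
print(validate_requested_memory(4000))  # the memory used by the passing starts
```

The second dry run in the test omits --memory entirely, which is why it passes without hitting this path.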

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-773235 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-773235 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (139.116273ms)
-- stdout --
	* [functional-773235] minikube v1.31.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-6342/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-6342/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0717 21:47:11.035927   50919 out.go:296] Setting OutFile to fd 1 ...
	I0717 21:47:11.036338   50919 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:47:11.036394   50919 out.go:309] Setting ErrFile to fd 2...
	I0717 21:47:11.036416   50919 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:47:11.037224   50919 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-6342/.minikube/bin
	I0717 21:47:11.037772   50919 out.go:303] Setting JSON to false
	I0717 21:47:11.038864   50919 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1777,"bootTime":1689628654,"procs":499,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 21:47:11.038931   50919 start.go:138] virtualization: kvm guest
	I0717 21:47:11.040653   50919 out.go:177] * [functional-773235] minikube v1.31.0 sur Ubuntu 20.04 (kvm/amd64)
	I0717 21:47:11.042229   50919 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 21:47:11.043592   50919 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 21:47:11.042239   50919 notify.go:220] Checking for updates...
	I0717 21:47:11.044867   50919 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-6342/kubeconfig
	I0717 21:47:11.046008   50919 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-6342/.minikube
	I0717 21:47:11.047131   50919 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 21:47:11.048292   50919 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 21:47:11.049679   50919 config.go:182] Loaded profile config "functional-773235": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0717 21:47:11.050085   50919 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 21:47:11.075549   50919 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 21:47:11.075640   50919 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 21:47:11.125474   50919 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:68 SystemTime:2023-07-17 21:47:11.117183447 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 21:47:11.125569   50919 docker.go:294] overlay module found
	I0717 21:47:11.127390   50919 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0717 21:47:11.128761   50919 start.go:298] selected driver: docker
	I0717 21:47:11.128771   50919 start.go:880] validating driver "docker" against &{Name:functional-773235 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-773235 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:47:11.128854   50919 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 21:47:11.130743   50919 out.go:177] 
	W0717 21:47:11.132173   50919 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0717 21:47:11.133417   50919 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

TestFunctional/parallel/StatusCmd (1.05s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.05s)

TestFunctional/parallel/ServiceCmdConnect (7.58s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-773235 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-773235 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6fb669fc84-stn27" [e3dfe5f5-a34e-4922-8279-ea74d54cca0b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-6fb669fc84-stn27" [e3dfe5f5-a34e-4922-8279-ea74d54cca0b] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.069571448s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:31605
functional_test.go:1674: http://192.168.49.2:31605: success! body:

Hostname: hello-node-connect-6fb669fc84-stn27

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31605
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.58s)

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (38.87s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [1473447b-66a2-4711-bc0a-c7df49cfa179] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.014045871s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-773235 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-773235 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-773235 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-773235 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3d581627-5a95-4379-baf3-c7594ba9be2f] Pending
helpers_test.go:344: "sp-pod" [3d581627-5a95-4379-baf3-c7594ba9be2f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3d581627-5a95-4379-baf3-c7594ba9be2f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.017616612s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-773235 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-773235 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-773235 delete -f testdata/storage-provisioner/pod.yaml: (1.477183085s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-773235 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2870c94c-5921-4cc0-add5-b64e193d270d] Pending
helpers_test.go:344: "sp-pod" [2870c94c-5921-4cc0-add5-b64e193d270d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2870c94c-5921-4cc0-add5-b64e193d270d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.046757238s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-773235 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (38.87s)
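The claim applied from `testdata/storage-provisioner/pvc.yaml` is only referenced by path here. A hypothetical minimal manifest of that shape (the name `myclaim` comes from the `get pvc` call above; the access mode and storage size are assumptions, not the repo file's contents):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim        # name from the `kubectl get pvc myclaim` step above
spec:
  accessModes:
    - ReadWriteOnce    # assumed mode
  resources:
    requests:
      storage: 500Mi   # assumed size
```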

TestFunctional/parallel/SSHCmd (0.58s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.58s)

TestFunctional/parallel/CpCmd (1.21s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 ssh -n functional-773235 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 cp functional-773235:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2028608099/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 ssh -n functional-773235 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.21s)

TestFunctional/parallel/MySQL (30.83s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-773235 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-7db894d786-vdh2j" [f214d235-1a4c-4318-bf39-36d428302bff] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-7db894d786-vdh2j" [f214d235-1a4c-4318-bf39-36d428302bff] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.007076294s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-773235 exec mysql-7db894d786-vdh2j -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-773235 exec mysql-7db894d786-vdh2j -- mysql -ppassword -e "show databases;": exit status 1 (176.139349ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-773235 exec mysql-7db894d786-vdh2j -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-773235 exec mysql-7db894d786-vdh2j -- mysql -ppassword -e "show databases;": exit status 1 (134.09898ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-773235 exec mysql-7db894d786-vdh2j -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-773235 exec mysql-7db894d786-vdh2j -- mysql -ppassword -e "show databases;": exit status 1 (136.61994ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-773235 exec mysql-7db894d786-vdh2j -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (30.83s)

TestFunctional/parallel/FileSync (0.25s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/13163/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 ssh "sudo cat /etc/test/nested/copy/13163/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

TestFunctional/parallel/CertSync (1.48s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/13163.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 ssh "sudo cat /etc/ssl/certs/13163.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/13163.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 ssh "sudo cat /usr/share/ca-certificates/13163.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/131632.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 ssh "sudo cat /etc/ssl/certs/131632.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/131632.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 ssh "sudo cat /usr/share/ca-certificates/131632.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.48s)

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-773235 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-773235 ssh "sudo systemctl is-active docker": exit status 1 (279.690336ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-773235 ssh "sudo systemctl is-active crio": exit status 1 (262.380983ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-773235 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-773235 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-775766b4cc-85h7g" [03a6e21e-d71d-433a-9fdc-d896752a522c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-775766b4cc-85h7g" [03a6e21e-d71d-433a-9fdc-d896752a522c] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.013862439s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.22s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-773235 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-773235 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-773235 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-773235 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 48306: os: process already finished
helpers_test.go:502: unable to terminate pid 47928: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-773235 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.35s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-773235 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [dd9d8db2-ddf9-43b6-8e75-d3a8dc81fa0f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [dd9d8db2-ddf9-43b6-8e75-d3a8dc81fa0f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.036906827s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.35s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.44s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.44s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-773235 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.3
registry.k8s.io/kube-proxy:v1.27.3
registry.k8s.io/kube-controller-manager:v1.27.3
registry.k8s.io/kube-apiserver:v1.27.3
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-773235
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-773235
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-773235 image ls --format short --alsologtostderr:
I0717 21:47:25.765437   55085 out.go:296] Setting OutFile to fd 1 ...
I0717 21:47:25.765563   55085 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:47:25.765571   55085 out.go:309] Setting ErrFile to fd 2...
I0717 21:47:25.765575   55085 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:47:25.765774   55085 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-6342/.minikube/bin
I0717 21:47:25.766291   55085 config.go:182] Loaded profile config "functional-773235": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0717 21:47:25.766382   55085 config.go:182] Loaded profile config "functional-773235": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0717 21:47:25.766738   55085 cli_runner.go:164] Run: docker container inspect functional-773235 --format={{.State.Status}}
I0717 21:47:25.782429   55085 ssh_runner.go:195] Run: systemctl --version
I0717 21:47:25.782471   55085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-773235
I0717 21:47:25.797877   55085 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/machines/functional-773235/id_rsa Username:docker}
I0717 21:47:25.887534   55085 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.20s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-773235 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/etcd                        | 3.5.7-0            | sha256:86b6af | 102MB  |
| registry.k8s.io/kube-scheduler              | v1.27.3            | sha256:41697c | 18.2MB |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| docker.io/library/minikube-local-cache-test | functional-773235  | sha256:17f002 | 1.01kB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| gcr.io/google-containers/addon-resizer      | functional-773235  | sha256:ffd4cf | 10.8MB |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:ead0a4 | 16.2MB |
| registry.k8s.io/kube-controller-manager     | v1.27.3            | sha256:7cffc0 | 31MB   |
| docker.io/kindest/kindnetd                  | v20230511-dc714da8 | sha256:b0b1fa | 27.7MB |
| docker.io/library/nginx                     | latest             | sha256:021283 | 70.6MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/kube-apiserver              | v1.27.3            | sha256:08a0c9 | 33.4MB |
| registry.k8s.io/kube-proxy                  | v1.27.3            | sha256:578054 | 23.9MB |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| docker.io/library/nginx                     | alpine             | sha256:493752 | 17MB   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-773235 image ls --format table --alsologtostderr:
I0717 21:47:26.348485   55331 out.go:296] Setting OutFile to fd 1 ...
I0717 21:47:26.348591   55331 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:47:26.348601   55331 out.go:309] Setting ErrFile to fd 2...
I0717 21:47:26.348605   55331 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:47:26.348809   55331 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-6342/.minikube/bin
I0717 21:47:26.349359   55331 config.go:182] Loaded profile config "functional-773235": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0717 21:47:26.349455   55331 config.go:182] Loaded profile config "functional-773235": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0717 21:47:26.349804   55331 cli_runner.go:164] Run: docker container inspect functional-773235 --format={{.State.Status}}
I0717 21:47:26.366421   55331 ssh_runner.go:195] Run: systemctl --version
I0717 21:47:26.366471   55331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-773235
I0717 21:47:26.383858   55331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/machines/functional-773235/id_rsa Username:docker}
I0717 21:47:26.472086   55331 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.20s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-773235 image ls --format json --alsologtostderr:
[{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:17f002b37b9c98bc43a26f870618879c714f7f98f46733c2f93f2a55e34b2107","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-773235"],"size":"1006"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"16190758"},{"id":"sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","repoDigests":["registry.k8s.io/etcd@sha256
:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83"],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"101639218"},{"id":"sha256:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0"],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.3"],"size":"33364386"},{"id":"sha256:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c","repoDigests":["registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699"],"repoTags":["registry.k8s.io/kube-proxy:v1.27.3"],"size":"23897400"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/paus
e@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.3"],"size":"30973055"},{"id":"sha256:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a","repoDigests":["registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.3"],"size":"18231737"},{"id":"sha256:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da","repoDigests":["docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"27731571"},{"id":"sha256:4937520ae206c8969734d9a659fc1e659
4d9b22b9340bf0796defbea0c92dd02","repoDigests":["docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6"],"repoTags":["docker.io/library/nginx:alpine"],"size":"16978757"},{"id":"sha256:021283c8eb95be02b23db0de7f609d603553c6714785e7a673c6594a624ffbda","repoDigests":["docker.io/library/nginx@sha256:08bc36ad52474e528cc1ea3426b5e3f4bad8a130318e3140d6cfe29c8892c7ef"],"repoTags":["docker.io/library/nginx:latest"],"size":"70601656"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-773235"],"size":"10823156"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410"
,"repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-773235 image ls --format json --alsologtostderr:
I0717 21:47:26.147914   55256 out.go:296] Setting OutFile to fd 1 ...
I0717 21:47:26.148022   55256 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:47:26.148030   55256 out.go:309] Setting ErrFile to fd 2...
I0717 21:47:26.148034   55256 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:47:26.148563   55256 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-6342/.minikube/bin
I0717 21:47:26.149623   55256 config.go:182] Loaded profile config "functional-773235": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0717 21:47:26.149739   55256 config.go:182] Loaded profile config "functional-773235": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0717 21:47:26.150127   55256 cli_runner.go:164] Run: docker container inspect functional-773235 --format={{.State.Status}}
I0717 21:47:26.167384   55256 ssh_runner.go:195] Run: systemctl --version
I0717 21:47:26.167423   55256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-773235
I0717 21:47:26.183302   55256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/machines/functional-773235/id_rsa Username:docker}
I0717 21:47:26.271942   55256 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.20s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-773235 image ls --format yaml --alsologtostderr:
- id: sha256:4937520ae206c8969734d9a659fc1e6594d9b22b9340bf0796defbea0c92dd02
repoDigests:
- docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6
repoTags:
- docker.io/library/nginx:alpine
size: "16978757"
- id: sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681
repoDigests:
- registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "101639218"
- id: sha256:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.3
size: "18231737"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da
repoDigests:
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "27731571"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:021283c8eb95be02b23db0de7f609d603553c6714785e7a673c6594a624ffbda
repoDigests:
- docker.io/library/nginx@sha256:08bc36ad52474e528cc1ea3426b5e3f4bad8a130318e3140d6cfe29c8892c7ef
repoTags:
- docker.io/library/nginx:latest
size: "70601656"
- id: sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "16190758"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:17f002b37b9c98bc43a26f870618879c714f7f98f46733c2f93f2a55e34b2107
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-773235
size: "1006"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.3
size: "30973055"
- id: sha256:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c
repoDigests:
- registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699
repoTags:
- registry.k8s.io/kube-proxy:v1.27.3
size: "23897400"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-773235
size: "10823156"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.3
size: "33364386"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-773235 image ls --format yaml --alsologtostderr:
I0717 21:47:25.948246   55129 out.go:296] Setting OutFile to fd 1 ...
I0717 21:47:25.948347   55129 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:47:25.948354   55129 out.go:309] Setting ErrFile to fd 2...
I0717 21:47:25.948358   55129 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:47:25.948574   55129 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-6342/.minikube/bin
I0717 21:47:25.949108   55129 config.go:182] Loaded profile config "functional-773235": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0717 21:47:25.949200   55129 config.go:182] Loaded profile config "functional-773235": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0717 21:47:25.949556   55129 cli_runner.go:164] Run: docker container inspect functional-773235 --format={{.State.Status}}
I0717 21:47:25.966902   55129 ssh_runner.go:195] Run: systemctl --version
I0717 21:47:25.966970   55129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-773235
I0717 21:47:25.983723   55129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/machines/functional-773235/id_rsa Username:docker}
I0717 21:47:26.071612   55129 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-773235 ssh pgrep buildkitd: exit status 1 (235.162196ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 image build -t localhost/my-image:functional-773235 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-773235 image build -t localhost/my-image:functional-773235 testdata/build --alsologtostderr: (4.442157695s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-773235 image build -t localhost/my-image:functional-773235 testdata/build --alsologtostderr:
I0717 21:47:26.200047   55277 out.go:296] Setting OutFile to fd 1 ...
I0717 21:47:26.200246   55277 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:47:26.200256   55277 out.go:309] Setting ErrFile to fd 2...
I0717 21:47:26.200261   55277 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:47:26.200445   55277 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-6342/.minikube/bin
I0717 21:47:26.200946   55277 config.go:182] Loaded profile config "functional-773235": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0717 21:47:26.201448   55277 config.go:182] Loaded profile config "functional-773235": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0717 21:47:26.201820   55277 cli_runner.go:164] Run: docker container inspect functional-773235 --format={{.State.Status}}
I0717 21:47:26.217516   55277 ssh_runner.go:195] Run: systemctl --version
I0717 21:47:26.217565   55277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-773235
I0717 21:47:26.233263   55277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/machines/functional-773235/id_rsa Username:docker}
I0717 21:47:26.323923   55277 build_images.go:151] Building image from path: /tmp/build.1886388258.tar
I0717 21:47:26.323970   55277 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0717 21:47:26.332609   55277 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1886388258.tar
I0717 21:47:26.335568   55277 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1886388258.tar: stat -c "%s %y" /var/lib/minikube/build/build.1886388258.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1886388258.tar': No such file or directory
I0717 21:47:26.335594   55277 ssh_runner.go:362] scp /tmp/build.1886388258.tar --> /var/lib/minikube/build/build.1886388258.tar (3072 bytes)
I0717 21:47:26.357789   55277 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1886388258
I0717 21:47:26.365865   55277 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1886388258 -xf /var/lib/minikube/build/build.1886388258.tar
I0717 21:47:26.373738   55277 containerd.go:378] Building image: /var/lib/minikube/build/build.1886388258
I0717 21:47:26.373805   55277 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1886388258 --local dockerfile=/var/lib/minikube/build/build.1886388258 --output type=image,name=localhost/my-image:functional-773235
#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.0s

#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 97B done
#2 DONE 0.0s

#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 1.9s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.1s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 1.2s

#6 [2/3] RUN true
#6 DONE 0.9s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:bec9a51177a2a3ddeec00d45a98b06726ae12c277fa8b963ea8c1b7ef25248b3 done
#8 exporting config sha256:0c1f7c65e25ce3f91c753b042585264961062b3d3ec95aa973191ff53eb8775d done
#8 naming to localhost/my-image:functional-773235 done
#8 DONE 0.1s
I0717 21:47:30.580246   55277 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1886388258 --local dockerfile=/var/lib/minikube/build/build.1886388258 --output type=image,name=localhost/my-image:functional-773235: (4.206410342s)
I0717 21:47:30.580337   55277 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1886388258
I0717 21:47:30.589219   55277 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1886388258.tar
I0717 21:47:30.596429   55277 build_images.go:207] Built localhost/my-image:functional-773235 from /tmp/build.1886388258.tar
I0717 21:47:30.596455   55277 build_images.go:123] succeeded building to: functional-773235
I0717 21:47:30.596460   55277 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 image ls
E0717 21:47:32.659108   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/addons-767732/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.87s)

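For reference, buildkit steps #2 through #7 in the ImageBuild log above (a 97-byte Dockerfile, a `FROM gcr.io/k8s-minikube/busybox` stage, `RUN true`, and `ADD content.txt /`) imply a test fixture along these lines. This is reconstructed from the log output, not copied from the actual `testdata/build` directory, so the real fixture may differ in detail:

```dockerfile
# Inferred from buildkit steps [1/3]-[3/3] in the log above (hypothetical reconstruction)
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
```
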
TestFunctional/parallel/ImageCommands/Setup (2.62s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.596569376s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-773235
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.62s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 image load --daemon gcr.io/google-containers/addon-resizer:functional-773235 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-773235 image load --daemon gcr.io/google-containers/addon-resizer:functional-773235 --alsologtostderr: (3.568814853s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.76s)

TestFunctional/parallel/ServiceCmd/List (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.31s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 image load --daemon gcr.io/google-containers/addon-resizer:functional-773235 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-773235 image load --daemon gcr.io/google-containers/addon-resizer:functional-773235 --alsologtostderr: (4.759707856s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.96s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 service list -o json
functional_test.go:1493: Took "381.105435ms" to run "out/minikube-linux-amd64 -p functional-773235 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.38s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:31965
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

TestFunctional/parallel/ServiceCmd/Format (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.32s)

TestFunctional/parallel/ServiceCmd/URL (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:31965
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-773235 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.145.77 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-773235 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.590606874s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-773235
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 image load --daemon gcr.io/google-containers/addon-resizer:functional-773235 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-773235 image load --daemon gcr.io/google-containers/addon-resizer:functional-773235 --alsologtostderr: (3.999023241s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.82s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "252.324608ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "39.352483ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "252.977324ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "39.178425ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

TestFunctional/parallel/MountCmd/any-port (10.06s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-773235 /tmp/TestFunctionalparallelMountCmdany-port2085954000/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1689630431138093884" to /tmp/TestFunctionalparallelMountCmdany-port2085954000/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1689630431138093884" to /tmp/TestFunctionalparallelMountCmdany-port2085954000/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1689630431138093884" to /tmp/TestFunctionalparallelMountCmdany-port2085954000/001/test-1689630431138093884
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-773235 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (251.249641ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 17 21:47 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 17 21:47 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 17 21:47 test-1689630431138093884
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 ssh cat /mount-9p/test-1689630431138093884
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-773235 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [71b1432d-6355-4d9b-9d8d-d7c5e4aeda1c] Pending
helpers_test.go:344: "busybox-mount" [71b1432d-6355-4d9b-9d8d-d7c5e4aeda1c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [71b1432d-6355-4d9b-9d8d-d7c5e4aeda1c] Running
helpers_test.go:344: "busybox-mount" [71b1432d-6355-4d9b-9d8d-d7c5e4aeda1c] Running: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [71b1432d-6355-4d9b-9d8d-d7c5e4aeda1c] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.071377416s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-773235 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-773235 /tmp/TestFunctionalparallelMountCmdany-port2085954000/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.06s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 update-context --alsologtostderr -v=2
2023/07/17 21:47:25 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 image save gcr.io/google-containers/addon-resizer:functional-773235 /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-773235 image save gcr.io/google-containers/addon-resizer:functional-773235 /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.71632168s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.72s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 image rm gcr.io/google-containers/addon-resizer:functional-773235 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.43s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.05s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-773235
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 image save --daemon gcr.io/google-containers/addon-resizer:functional-773235 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-773235
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.78s)

TestFunctional/parallel/MountCmd/specific-port (1.98s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-773235 /tmp/TestFunctionalparallelMountCmdspecific-port2792993218/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-773235 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (266.176702ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-773235 /tmp/TestFunctionalparallelMountCmdspecific-port2792993218/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-773235 ssh "sudo umount -f /mount-9p": exit status 1 (256.376962ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-773235 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-773235 /tmp/TestFunctionalparallelMountCmdspecific-port2792993218/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.98s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.73s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-773235 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3794220986/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-773235 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3794220986/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-773235 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3794220986/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-773235 ssh "findmnt -T" /mount1: exit status 1 (358.613643ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-773235 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-773235 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-773235 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3794220986/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-773235 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3794220986/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-773235 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3794220986/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.73s)

TestFunctional/delete_addon-resizer_images (0.06s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-773235
--- PASS: TestFunctional/delete_addon-resizer_images (0.06s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-773235
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-773235
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestIngressAddonLegacy/StartLegacyK8sCluster (83.49s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-021291 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0717 21:48:54.579920   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/addons-767732/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-021291 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m23.486063623s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (83.49s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.77s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-021291 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-021291 addons enable ingress --alsologtostderr -v=5: (12.765458248s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.77s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.5s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-021291 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.50s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (38.56s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-021291 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-021291 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.80269321s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-021291 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-021291 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9793f5cb-3d92-4772-9edc-e70f4345dd9e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9793f5cb-3d92-4772-9edc-e70f4345dd9e] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.005231608s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-021291 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-021291 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-021291 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-021291 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-021291 addons disable ingress-dns --alsologtostderr -v=1: (3.293604936s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-021291 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-021291 addons disable ingress --alsologtostderr -v=1: (7.399411446s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (38.56s)

TestJSONOutput/start/Command (50.18s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-409374 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-409374 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (50.180185249s)
--- PASS: TestJSONOutput/start/Command (50.18s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.62s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-409374 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.62s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.55s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-409374 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.55s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.66s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-409374 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-409374 --output=json --user=testUser: (5.656176855s)
--- PASS: TestJSONOutput/stop/Command (5.66s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.18s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-527690 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-527690 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (60.557361ms)
-- stdout --
	{"specversion":"1.0","id":"4e3a2e1b-e2e6-4659-93bf-57a95d896586","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-527690] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5fbc900c-822b-435d-9bde-7293f1c54cc4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16899"}}
	{"specversion":"1.0","id":"7f163784-fd52-4219-ba85-0b60e77a555f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"55ecf602-c793-4458-a4e5-90259079bfb7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16899-6342/kubeconfig"}}
	{"specversion":"1.0","id":"c449ef65-a325-46c6-847b-2d4a119ff652","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-6342/.minikube"}}
	{"specversion":"1.0","id":"ceb5a426-a30f-4b13-a3f0-7c495ea74aec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e0fcd9fc-6240-4053-96c0-00f1da698e36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a9413451-db82-4372-938c-0accec73f69b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-527690" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-527690
--- PASS: TestErrorJSONOutput (0.18s)

TestKicCustomNetwork/create_custom_network (41.91s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-493222 --network=
E0717 21:51:38.420145   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/addons-767732/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-493222 --network=: (39.95159971s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-493222" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-493222
E0717 21:51:56.537830   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/functional-773235/client.crt: no such file or directory
E0717 21:51:56.543094   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/functional-773235/client.crt: no such file or directory
E0717 21:51:56.553357   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/functional-773235/client.crt: no such file or directory
E0717 21:51:56.573680   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/functional-773235/client.crt: no such file or directory
E0717 21:51:56.613958   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/functional-773235/client.crt: no such file or directory
E0717 21:51:56.694238   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/functional-773235/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-493222: (1.93774103s)
--- PASS: TestKicCustomNetwork/create_custom_network (41.91s)

TestKicCustomNetwork/use_default_bridge_network (26.66s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-126838 --network=bridge
E0717 21:51:56.855295   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/functional-773235/client.crt: no such file or directory
E0717 21:51:57.175577   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/functional-773235/client.crt: no such file or directory
E0717 21:51:57.816491   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/functional-773235/client.crt: no such file or directory
E0717 21:51:59.096965   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/functional-773235/client.crt: no such file or directory
E0717 21:52:01.658015   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/functional-773235/client.crt: no such file or directory
E0717 21:52:06.778167   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/functional-773235/client.crt: no such file or directory
E0717 21:52:17.018830   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/functional-773235/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-126838 --network=bridge: (24.798232789s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-126838" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-126838
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-126838: (1.849203536s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.66s)

TestKicExistingNetwork (26.74s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-574596 --network=existing-network
E0717 21:52:37.499172   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/functional-773235/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-574596 --network=existing-network: (24.802897269s)
helpers_test.go:175: Cleaning up "existing-network-574596" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-574596
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-574596: (1.813583736s)
--- PASS: TestKicExistingNetwork (26.74s)

TestKicCustomSubnet (24.19s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-884534 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-884534 --subnet=192.168.60.0/24: (22.159120601s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-884534 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-884534" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-884534
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-884534: (2.012075899s)
--- PASS: TestKicCustomSubnet (24.19s)

TestKicStaticIP (23.58s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-636191 --static-ip=192.168.200.200
E0717 21:53:18.460197   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/functional-773235/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-636191 --static-ip=192.168.200.200: (21.409685916s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-636191 ip
helpers_test.go:175: Cleaning up "static-ip-636191" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-636191
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-636191: (2.054383368s)
--- PASS: TestKicStaticIP (23.58s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (48.22s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-045388 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-045388 --driver=docker  --container-runtime=containerd: (23.132869677s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-048515 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-048515 --driver=docker  --container-runtime=containerd: (20.269453166s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-045388
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-048515
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-048515" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-048515
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-048515: (1.775187178s)
helpers_test.go:175: Cleaning up "first-045388" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-045388
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-045388: (2.136086732s)
--- PASS: TestMinikubeProfile (48.22s)

TestMountStart/serial/StartWithMountFirst (5.19s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-704441 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-704441 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.185074262s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.19s)

TestMountStart/serial/VerifyMountFirst (0.22s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-704441 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.22s)

TestMountStart/serial/StartWithMountSecond (5.07s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-720857 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
E0717 21:54:31.969546   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/ingress-addon-legacy-021291/client.crt: no such file or directory
E0717 21:54:31.974841   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/ingress-addon-legacy-021291/client.crt: no such file or directory
E0717 21:54:31.985098   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/ingress-addon-legacy-021291/client.crt: no such file or directory
E0717 21:54:32.005381   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/ingress-addon-legacy-021291/client.crt: no such file or directory
E0717 21:54:32.045637   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/ingress-addon-legacy-021291/client.crt: no such file or directory
E0717 21:54:32.125949   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/ingress-addon-legacy-021291/client.crt: no such file or directory
E0717 21:54:32.286358   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/ingress-addon-legacy-021291/client.crt: no such file or directory
E0717 21:54:32.606967   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/ingress-addon-legacy-021291/client.crt: no such file or directory
E0717 21:54:33.247891   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/ingress-addon-legacy-021291/client.crt: no such file or directory
E0717 21:54:34.528369   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/ingress-addon-legacy-021291/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-720857 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.065179155s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.07s)

TestMountStart/serial/VerifyMountSecond (0.22s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-720857 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.22s)

TestMountStart/serial/DeleteFirst (1.58s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-704441 --alsologtostderr -v=5
E0717 21:54:37.088700   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/ingress-addon-legacy-021291/client.crt: no such file or directory
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-704441 --alsologtostderr -v=5: (1.583036996s)
--- PASS: TestMountStart/serial/DeleteFirst (1.58s)

TestMountStart/serial/VerifyMountPostDelete (0.23s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-720857 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.23s)

TestMountStart/serial/Stop (1.18s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-720857
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-720857: (1.1783098s)
--- PASS: TestMountStart/serial/Stop (1.18s)

TestMountStart/serial/RestartStopped (7.03s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-720857
E0717 21:54:40.380351   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/functional-773235/client.crt: no such file or directory
E0717 21:54:42.209843   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/ingress-addon-legacy-021291/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-720857: (6.032803137s)
--- PASS: TestMountStart/serial/RestartStopped (7.03s)

TestMountStart/serial/VerifyMountPostStop (0.23s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-720857 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.23s)

TestMultiNode/serial/FreshStart2Nodes (62.89s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-703057 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0717 21:54:52.450286   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/ingress-addon-legacy-021291/client.crt: no such file or directory
E0717 21:55:12.931001   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/ingress-addon-legacy-021291/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-703057 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m2.479676349s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (62.89s)

TestMultiNode/serial/DeployApp2Nodes (5.89s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703057 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703057 -- rollout status deployment/busybox
E0717 21:55:53.891910   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/ingress-addon-legacy-021291/client.crt: no such file or directory
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-703057 -- rollout status deployment/busybox: (3.889443951s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703057 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703057 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703057 -- exec busybox-67b7f59bb-7sq27 -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703057 -- exec busybox-67b7f59bb-q9jpq -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703057 -- exec busybox-67b7f59bb-7sq27 -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703057 -- exec busybox-67b7f59bb-q9jpq -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703057 -- exec busybox-67b7f59bb-7sq27 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703057 -- exec busybox-67b7f59bb-q9jpq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.89s)

TestMultiNode/serial/PingHostFrom2Pods (0.77s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703057 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703057 -- exec busybox-67b7f59bb-7sq27 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703057 -- exec busybox-67b7f59bb-7sq27 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703057 -- exec busybox-67b7f59bb-q9jpq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703057 -- exec busybox-67b7f59bb-q9jpq -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.77s)

TestMultiNode/serial/AddNode (18.36s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-703057 -v 3 --alsologtostderr
E0717 21:56:10.735193   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/addons-767732/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-703057 -v 3 --alsologtostderr: (17.795855185s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.36s)

TestMultiNode/serial/ProfileList (0.26s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.26s)

                                                
                                    
TestMultiNode/serial/CopyFile (8.48s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 cp testdata/cp-test.txt multinode-703057:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 ssh -n multinode-703057 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 cp multinode-703057:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1200769560/001/cp-test_multinode-703057.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 ssh -n multinode-703057 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 cp multinode-703057:/home/docker/cp-test.txt multinode-703057-m02:/home/docker/cp-test_multinode-703057_multinode-703057-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 ssh -n multinode-703057 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 ssh -n multinode-703057-m02 "sudo cat /home/docker/cp-test_multinode-703057_multinode-703057-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 cp multinode-703057:/home/docker/cp-test.txt multinode-703057-m03:/home/docker/cp-test_multinode-703057_multinode-703057-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 ssh -n multinode-703057 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 ssh -n multinode-703057-m03 "sudo cat /home/docker/cp-test_multinode-703057_multinode-703057-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 cp testdata/cp-test.txt multinode-703057-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 ssh -n multinode-703057-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 cp multinode-703057-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1200769560/001/cp-test_multinode-703057-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 ssh -n multinode-703057-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 cp multinode-703057-m02:/home/docker/cp-test.txt multinode-703057:/home/docker/cp-test_multinode-703057-m02_multinode-703057.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 ssh -n multinode-703057-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 ssh -n multinode-703057 "sudo cat /home/docker/cp-test_multinode-703057-m02_multinode-703057.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 cp multinode-703057-m02:/home/docker/cp-test.txt multinode-703057-m03:/home/docker/cp-test_multinode-703057-m02_multinode-703057-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 ssh -n multinode-703057-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 ssh -n multinode-703057-m03 "sudo cat /home/docker/cp-test_multinode-703057-m02_multinode-703057-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 cp testdata/cp-test.txt multinode-703057-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 ssh -n multinode-703057-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 cp multinode-703057-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1200769560/001/cp-test_multinode-703057-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 ssh -n multinode-703057-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 cp multinode-703057-m03:/home/docker/cp-test.txt multinode-703057:/home/docker/cp-test_multinode-703057-m03_multinode-703057.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 ssh -n multinode-703057-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 ssh -n multinode-703057 "sudo cat /home/docker/cp-test_multinode-703057-m03_multinode-703057.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 cp multinode-703057-m03:/home/docker/cp-test.txt multinode-703057-m02:/home/docker/cp-test_multinode-703057-m03_multinode-703057-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 ssh -n multinode-703057-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 ssh -n multinode-703057-m02 "sudo cat /home/docker/cp-test_multinode-703057-m03_multinode-703057-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.48s)
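The sequence above exercises every ordered pair of nodes: copy `cp-test.txt` from source to destination with `minikube cp`, then verify each copy with `ssh ... sudo cat`. The destination filename encodes the pair. A sketch of the naming scheme, using the node names from this run:

```shell
# Enumerate the cross-node copy targets the CopyFile test verifies.
nodes='multinode-703057 multinode-703057-m02 multinode-703057-m03'
for src in $nodes; do
  for dst in $nodes; do
    [ "$src" = "$dst" ] && continue          # no self-copy
    echo "cp-test_${src}_${dst}.txt"         # matches the paths in the log
  done
done
```

With three nodes this yields six ordered pairs, which is exactly the set of `cp-test_*_*.txt` files checked above.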

                                                
                                    
TestMultiNode/serial/StopNode (2.03s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-703057 node stop m03: (1.171981767s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-703057 status: exit status 7 (422.224862ms)

                                                
                                                
-- stdout --
	multinode-703057
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-703057-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-703057-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-703057 status --alsologtostderr: exit status 7 (435.733108ms)

                                                
                                                
-- stdout --
	multinode-703057
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-703057-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-703057-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 21:56:27.115335  113654 out.go:296] Setting OutFile to fd 1 ...
	I0717 21:56:27.115465  113654 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:56:27.115476  113654 out.go:309] Setting ErrFile to fd 2...
	I0717 21:56:27.115482  113654 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:56:27.115686  113654 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-6342/.minikube/bin
	I0717 21:56:27.115889  113654 out.go:303] Setting JSON to false
	I0717 21:56:27.115925  113654 mustload.go:65] Loading cluster: multinode-703057
	I0717 21:56:27.116021  113654 notify.go:220] Checking for updates...
	I0717 21:56:27.116289  113654 config.go:182] Loaded profile config "multinode-703057": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0717 21:56:27.116304  113654 status.go:255] checking status of multinode-703057 ...
	I0717 21:56:27.116696  113654 cli_runner.go:164] Run: docker container inspect multinode-703057 --format={{.State.Status}}
	I0717 21:56:27.134522  113654 status.go:330] multinode-703057 host status = "Running" (err=<nil>)
	I0717 21:56:27.134549  113654 host.go:66] Checking if "multinode-703057" exists ...
	I0717 21:56:27.134871  113654 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-703057
	I0717 21:56:27.149628  113654 host.go:66] Checking if "multinode-703057" exists ...
	I0717 21:56:27.149863  113654 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 21:56:27.149906  113654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-703057
	I0717 21:56:27.165439  113654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/machines/multinode-703057/id_rsa Username:docker}
	I0717 21:56:27.256277  113654 ssh_runner.go:195] Run: systemctl --version
	I0717 21:56:27.259849  113654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 21:56:27.269259  113654 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 21:56:27.321193  113654 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:56 SystemTime:2023-07-17 21:56:27.31265214 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 21:56:27.321860  113654 kubeconfig.go:92] found "multinode-703057" server: "https://192.168.58.2:8443"
	I0717 21:56:27.321884  113654 api_server.go:166] Checking apiserver status ...
	I0717 21:56:27.321919  113654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 21:56:27.331727  113654 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1448/cgroup
	I0717 21:56:27.339738  113654 api_server.go:182] apiserver freezer: "6:freezer:/docker/00818588955e69c44cd45815e1890d8a8ea652bb6916885d2bffb92eef1e0661/kubepods/burstable/pod4eb8712149d81fd8af923afea3167bab/4864cf00ee3c743ad697809052e930360b77d38bfa5ad7f6488fe796b836b91c"
	I0717 21:56:27.339847  113654 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/00818588955e69c44cd45815e1890d8a8ea652bb6916885d2bffb92eef1e0661/kubepods/burstable/pod4eb8712149d81fd8af923afea3167bab/4864cf00ee3c743ad697809052e930360b77d38bfa5ad7f6488fe796b836b91c/freezer.state
	I0717 21:56:27.346908  113654 api_server.go:204] freezer state: "THAWED"
	I0717 21:56:27.346930  113654 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0717 21:56:27.351046  113654 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0717 21:56:27.351064  113654 status.go:421] multinode-703057 apiserver status = Running (err=<nil>)
	I0717 21:56:27.351074  113654 status.go:257] multinode-703057 status: &{Name:multinode-703057 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 21:56:27.351089  113654 status.go:255] checking status of multinode-703057-m02 ...
	I0717 21:56:27.351303  113654 cli_runner.go:164] Run: docker container inspect multinode-703057-m02 --format={{.State.Status}}
	I0717 21:56:27.368782  113654 status.go:330] multinode-703057-m02 host status = "Running" (err=<nil>)
	I0717 21:56:27.368802  113654 host.go:66] Checking if "multinode-703057-m02" exists ...
	I0717 21:56:27.369027  113654 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-703057-m02
	I0717 21:56:27.386865  113654 host.go:66] Checking if "multinode-703057-m02" exists ...
	I0717 21:56:27.387155  113654 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 21:56:27.387193  113654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-703057-m02
	I0717 21:56:27.402431  113654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/16899-6342/.minikube/machines/multinode-703057-m02/id_rsa Username:docker}
	I0717 21:56:27.487999  113654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 21:56:27.497673  113654 status.go:257] multinode-703057-m02 status: &{Name:multinode-703057-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0717 21:56:27.497709  113654 status.go:255] checking status of multinode-703057-m03 ...
	I0717 21:56:27.497938  113654 cli_runner.go:164] Run: docker container inspect multinode-703057-m03 --format={{.State.Status}}
	I0717 21:56:27.513386  113654 status.go:330] multinode-703057-m03 host status = "Stopped" (err=<nil>)
	I0717 21:56:27.513404  113654 status.go:343] host is not running, skipping remaining checks
	I0717 21:56:27.513414  113654 status.go:257] multinode-703057-m03 status: &{Name:multinode-703057-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.03s)
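In the stderr above, the apiserver health check first finds the kube-apiserver PID with `pgrep`, then reads its freezer cgroup path out of `/proc/<pid>/cgroup` before probing `/healthz`. A minimal sketch of the path extraction against sample cgroup-v1 content (the container IDs here are illustrative, shortened from the log):

```shell
# Sample cgroup-v1 /proc/<pid>/cgroup content (illustrative IDs).
cgroup='12:cpuset:/docker/00818588/kubepods/burstable/podX/containerY
6:freezer:/docker/00818588/kubepods/burstable/podX/containerY
1:name=systemd:/docker/00818588'

# Keep the freezer hierarchy line, then take the path (third : field).
path=$(printf '%s\n' "$cgroup" | grep -E '^[0-9]+:freezer:' | cut -d: -f3)
echo "$path"
# The status check then reads /sys/fs/cgroup/freezer$path/freezer.state
# and probes the apiserver's /healthz only when the state is THAWED.
```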

                                                
                                    
TestMultiNode/serial/StartAfterStop (10.41s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-703057 node start m03 --alsologtostderr: (9.780122395s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.41s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (120.5s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-703057
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-703057
E0717 21:56:56.539852   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/functional-773235/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-703057: (24.765795431s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-703057 --wait=true -v=8 --alsologtostderr
E0717 21:57:15.814979   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/ingress-addon-legacy-021291/client.crt: no such file or directory
E0717 21:57:24.220684   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/functional-773235/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-703057 --wait=true -v=8 --alsologtostderr: (1m35.655129057s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-703057
--- PASS: TestMultiNode/serial/RestartKeepsNodes (120.50s)

                                                
                                    
TestMultiNode/serial/DeleteNode (4.55s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-703057 node delete m03: (4.001079991s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.55s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.79s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-703057 stop: (23.647114863s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-703057 status: exit status 7 (74.341188ms)

                                                
                                                
-- stdout --
	multinode-703057
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-703057-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-703057 status --alsologtostderr: exit status 7 (72.35933ms)

                                                
                                                
-- stdout --
	multinode-703057
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-703057-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 21:59:06.729097  124032 out.go:296] Setting OutFile to fd 1 ...
	I0717 21:59:06.729361  124032 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:59:06.729378  124032 out.go:309] Setting ErrFile to fd 2...
	I0717 21:59:06.729386  124032 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:59:06.729887  124032 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-6342/.minikube/bin
	I0717 21:59:06.730155  124032 out.go:303] Setting JSON to false
	I0717 21:59:06.730212  124032 mustload.go:65] Loading cluster: multinode-703057
	I0717 21:59:06.730257  124032 notify.go:220] Checking for updates...
	I0717 21:59:06.730857  124032 config.go:182] Loaded profile config "multinode-703057": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0717 21:59:06.730876  124032 status.go:255] checking status of multinode-703057 ...
	I0717 21:59:06.731292  124032 cli_runner.go:164] Run: docker container inspect multinode-703057 --format={{.State.Status}}
	I0717 21:59:06.747504  124032 status.go:330] multinode-703057 host status = "Stopped" (err=<nil>)
	I0717 21:59:06.747519  124032 status.go:343] host is not running, skipping remaining checks
	I0717 21:59:06.747524  124032 status.go:257] multinode-703057 status: &{Name:multinode-703057 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 21:59:06.747544  124032 status.go:255] checking status of multinode-703057-m02 ...
	I0717 21:59:06.747754  124032 cli_runner.go:164] Run: docker container inspect multinode-703057-m02 --format={{.State.Status}}
	I0717 21:59:06.763155  124032 status.go:330] multinode-703057-m02 host status = "Stopped" (err=<nil>)
	I0717 21:59:06.763171  124032 status.go:343] host is not running, skipping remaining checks
	I0717 21:59:06.763179  124032 status.go:257] multinode-703057-m02 status: &{Name:multinode-703057-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.79s)
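As in StopNode above, `minikube status` exits non-zero once any node is down. A sketch of the apparent rule (an assumption inferred from this log's exit codes, not minikube's actual source):

```shell
# Assumption from this log: status exits 7 when any host is Stopped,
# and 0 only when every host is Running.
hosts='Stopped Stopped'   # both nodes after the stop above

code=0
for h in $hosts; do
  [ "$h" = "Running" ] || code=7
done
echo "$code"   # -> 7, matching the "exit status 7" seen here
```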

                                                
                                    
TestMultiNode/serial/RestartMultiNode (84.66s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-703057 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0717 21:59:31.969567   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/ingress-addon-legacy-021291/client.crt: no such file or directory
E0717 21:59:59.655406   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/ingress-addon-legacy-021291/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-703057 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m24.078602738s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703057 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (84.66s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (25.07s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-703057
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-703057-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-703057-m02 --driver=docker  --container-runtime=containerd: exit status 14 (60.199261ms)

                                                
                                                
-- stdout --
	* [multinode-703057-m02] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-6342/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-6342/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-703057-m02' is duplicated with machine name 'multinode-703057-m02' in profile 'multinode-703057'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-703057-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-703057-m03 --driver=docker  --container-runtime=containerd: (22.900893607s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-703057
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-703057: exit status 80 (249.750473ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-703057
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-703057-m03 already exists in multinode-703057-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-703057-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-703057-m03: (1.814274364s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.07s)
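The MK_USAGE failure above comes from a uniqueness rule: a new profile name may not collide with a machine name already claimed by an existing multi-node profile (`<profile>`, `<profile>-m02`, ...). A hedged sketch of that check as observed in this run (not minikube's actual implementation; `check_profile_name` and the `-m04` name are hypothetical):

```shell
# Machine names claimed by the existing multi-node profile in this run.
existing='multinode-703057 multinode-703057-m02 multinode-703057-m03'

check_profile_name() {
  for machine in $existing; do
    if [ "$machine" = "$1" ]; then
      echo "Profile name should be unique"   # mirrors the MK_USAGE error
      return 14                              # exit status seen in the log
    fi
  done
  return 0
}

check_profile_name multinode-703057-m02   # collides -> prints the error
check_profile_name multinode-703057-m04   # free name -> no output
```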

                                                
                                    
TestPreload (174.36s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-048419 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0717 22:01:10.735146   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/addons-767732/client.crt: no such file or directory
E0717 22:01:56.537063   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/functional-773235/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-048419 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m11.728542886s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-048419 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-048419 image pull gcr.io/k8s-minikube/busybox: (2.829489407s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-048419
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-048419: (5.650186381s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-048419 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E0717 22:02:33.781439   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/addons-767732/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-048419 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (1m31.805924874s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-048419 image list
helpers_test.go:175: Cleaning up "test-preload-048419" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-048419
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-048419: (2.154419377s)
--- PASS: TestPreload (174.36s)

                                                
                                    
TestScheduledStopUnix (97.84s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-016787 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-016787 --memory=2048 --driver=docker  --container-runtime=containerd: (22.504910275s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-016787 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-016787 -n scheduled-stop-016787
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-016787 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-016787 --cancel-scheduled
E0717 22:04:31.968760   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/ingress-addon-legacy-021291/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-016787 -n scheduled-stop-016787
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-016787
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-016787 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-016787
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-016787: exit status 7 (57.750398ms)

-- stdout --
	scheduled-stop-016787
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-016787 -n scheduled-stop-016787
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-016787 -n scheduled-stop-016787: exit status 7 (60.987139ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-016787" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-016787
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-016787: (4.157372852s)
--- PASS: TestScheduledStopUnix (97.84s)
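The `status error: exit status 7 (may be ok)` lines above show the test tolerating a stopped host rather than failing. A minimal sketch of that tolerance as a shell helper, assuming (per this log) that `minikube status` exits 0 for a running host and 7 once the scheduled stop has fired:

```shell
# Hypothetical helper mirroring the test's "exit status 7 (may be ok)" handling.
status_ok() {
  case "$1" in
    0|7) return 0 ;;  # 0 = running, 7 = stopped (expected after a scheduled stop)
    *)   return 1 ;;  # any other exit code is a real failure
  esac
}
```

In a flow like the one above this would wrap the status calls, e.g. `out/minikube-linux-amd64 status -p scheduled-stop-016787; status_ok $?`.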

TestInsufficientStorage (9.47s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-132935 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-132935 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.225964357s)

-- stdout --
	{"specversion":"1.0","id":"b4342b28-dd8b-4fe6-9d22-01a885eb8c41","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-132935] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2c9f0f28-dfef-4abf-9ab0-12f927e0a78b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16899"}}
	{"specversion":"1.0","id":"026fbb23-013d-4176-bb32-c5f4530444f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3f0574eb-a7d1-4599-be4a-4899009813da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16899-6342/kubeconfig"}}
	{"specversion":"1.0","id":"bc5d4f24-6069-43ed-a20f-201683467ab7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-6342/.minikube"}}
	{"specversion":"1.0","id":"fb47c3c0-ef51-4fe9-be6b-daa739e2381a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8259e8c1-6faa-4bff-8a76-a5ed1a351368","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"28570ed7-da97-4850-89c4-1b734d5fbbd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"16a5ab07-7bb1-4433-bf92-98a1ca9c37c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"44018b54-b559-4d99-a834-a93febe2d790","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ddae0dbf-0fd5-4e09-9d8f-69c8aa05629a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"23299f19-18dd-4384-aed0-a848be80a67c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-132935 in cluster insufficient-storage-132935","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"feb99e4b-3c8b-43bd-a637-3640d01d7563","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"d21aa135-20d0-4160-89f1-dc3d3e30cec8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"8f37cf1c-5c60-48fc-8e3b-6aef29a5e46d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-132935 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-132935 --output=json --layout=cluster: exit status 7 (244.876003ms)

-- stdout --
	{"Name":"insufficient-storage-132935","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-132935","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0717 22:05:39.882047  145622 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-132935" does not appear in /home/jenkins/minikube-integration/16899-6342/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-132935 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-132935 --output=json --layout=cluster: exit status 7 (247.453973ms)

-- stdout --
	{"Name":"insufficient-storage-132935","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-132935","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0717 22:05:40.129527  145710 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-132935" does not appear in /home/jenkins/minikube-integration/16899-6342/kubeconfig
	E0717 22:05:40.139071  145710 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/insufficient-storage-132935/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-132935" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-132935
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-132935: (1.747384783s)
--- PASS: TestInsufficientStorage (9.47s)
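With `--output=json`, minikube emits a CloudEvents stream, and the storage failure above surfaces as an `io.k8s.sigs.minikube.error` event whose `data` carries `exitcode` and `name` fields. A jq-free sketch of pulling those fields out of one such event (the sample event is abridged from the log above):

```shell
# Sketch: extract the exit code and error name from a minikube --output=json
# error event using only grep/cut (no jq dependency assumed on the CI host).
event='{"type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE"}}'
code=$(printf '%s' "$event" | grep -o '"exitcode":"[0-9]*"' | grep -o '[0-9]*')
name=$(printf '%s' "$event" | grep -o '"name":"[A-Z_]*"' | cut -d'"' -f4)
echo "$name failed with exit code $code"
```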

TestRunningBinaryUpgrade (159.51s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.22.0.1354230936.exe start -p running-upgrade-295219 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0717 22:06:10.734342   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/addons-767732/client.crt: no such file or directory
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.22.0.1354230936.exe start -p running-upgrade-295219 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (1m42.348655627s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-295219 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:142: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-295219 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (49.273856615s)
helpers_test.go:175: Cleaning up "running-upgrade-295219" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-295219
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-295219: (4.926763663s)
--- PASS: TestRunningBinaryUpgrade (159.51s)

TestKubernetesUpgrade (373.03s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-965748 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-965748 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (48.766338834s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-965748
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-965748: (1.700053324s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-965748 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-965748 status --format={{.Host}}: exit status 7 (63.91388ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-965748 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0717 22:09:31.969470   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/ingress-addon-legacy-021291/client.crt: no such file or directory
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-965748 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m49.89101821s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-965748 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-965748 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-965748 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd: exit status 106 (74.456209ms)

-- stdout --
	* [kubernetes-upgrade-965748] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-6342/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-6342/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.27.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-965748
	    minikube start -p kubernetes-upgrade-965748 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9657482 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.27.3, by running:
	    
	    minikube start -p kubernetes-upgrade-965748 --kubernetes-version=v1.27.3
	    

** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-965748 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0717 22:14:31.969642   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/ingress-addon-legacy-021291/client.crt: no such file or directory
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-965748 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (30.259470358s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-965748" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-965748
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-965748: (2.199460023s)
--- PASS: TestKubernetesUpgrade (373.03s)
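The exit status 106 / `K8S_DOWNGRADE_UNSUPPORTED` failure above shows minikube refusing to move an existing v1.27.3 cluster back to v1.16.0. A rough shell sketch of that guard (assumed logic for illustration; minikube's real comparison lives in its Go code): `sort -V` puts the lower version first, so if the requested version sorts strictly below the cluster's current version, a downgrade is being asked for.

```shell
# Hypothetical downgrade check: succeeds (exit 0) when $2 is an older
# Kubernetes version than $1, using GNU version sort.
is_downgrade() {
  current="$1"; requested="$2"
  [ "$current" != "$requested" ] &&
    [ "$(printf '%s\n%s\n' "$current" "$requested" | sort -V | head -n 1)" = "$requested" ]
}

is_downgrade v1.27.3 v1.16.0 && echo "refuse: cannot safely downgrade v1.27.3 -> v1.16.0"
```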

TestStoppedBinaryUpgrade/Setup (3.07s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.07s)

TestStoppedBinaryUpgrade/Upgrade (167.47s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.22.0.3976587372.exe start -p stopped-upgrade-944884 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.22.0.3976587372.exe start -p stopped-upgrade-944884 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (1m44.496599245s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.22.0.3976587372.exe -p stopped-upgrade-944884 stop
version_upgrade_test.go:204: (dbg) Done: /tmp/minikube-v1.22.0.3976587372.exe -p stopped-upgrade-944884 stop: (12.646228256s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-944884 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:210: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-944884 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (50.330227589s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (167.47s)

TestNetworkPlugins/group/false (6.92s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-871101 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-871101 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (179.594696ms)

-- stdout --
	* [false-871101] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-6342/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-6342/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0717 22:05:45.136414  147506 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:05:45.136571  147506 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:05:45.136581  147506 out.go:309] Setting ErrFile to fd 2...
	I0717 22:05:45.136585  147506 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:05:45.136785  147506 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-6342/.minikube/bin
	I0717 22:05:45.137398  147506 out.go:303] Setting JSON to false
	I0717 22:05:45.138415  147506 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2891,"bootTime":1689628654,"procs":382,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 22:05:45.138475  147506 start.go:138] virtualization: kvm guest
	I0717 22:05:45.140564  147506 out.go:177] * [false-871101] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 22:05:45.142036  147506 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 22:05:45.143520  147506 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 22:05:45.142067  147506 notify.go:220] Checking for updates...
	I0717 22:05:45.146671  147506 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-6342/kubeconfig
	I0717 22:05:45.148177  147506 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-6342/.minikube
	I0717 22:05:45.151920  147506 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 22:05:45.153198  147506 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 22:05:45.155052  147506 config.go:182] Loaded profile config "force-systemd-flag-981449": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0717 22:05:45.155238  147506 config.go:182] Loaded profile config "offline-containerd-923943": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0717 22:05:45.155386  147506 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 22:05:45.186298  147506 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 22:05:45.186409  147506 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 22:05:45.254205  147506 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:63 SystemTime:2023-07-17 22:05:45.244772103 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 22:05:45.254328  147506 docker.go:294] overlay module found
	I0717 22:05:45.256117  147506 out.go:177] * Using the docker driver based on user configuration
	I0717 22:05:45.257903  147506 start.go:298] selected driver: docker
	I0717 22:05:45.257915  147506 start.go:880] validating driver "docker" against <nil>
	I0717 22:05:45.257929  147506 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 22:05:45.260146  147506 out.go:177] 
	W0717 22:05:45.261389  147506 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0717 22:05:45.262609  147506 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-871101 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-871101

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-871101

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-871101

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-871101

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-871101

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-871101

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-871101

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-871101

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-871101

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-871101

>>> host: /etc/nsswitch.conf:
* Profile "false-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-871101"

>>> host: /etc/hosts:
* Profile "false-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-871101"

>>> host: /etc/resolv.conf:
* Profile "false-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-871101"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-871101

>>> host: crictl pods:
* Profile "false-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-871101"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-871101"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-871101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-871101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-871101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-871101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-871101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-871101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-871101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-871101" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-871101"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-871101"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-871101"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-871101"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-871101"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-871101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-871101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-871101" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-871101"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-871101"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-871101"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-871101"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-871101"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-871101

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-871101"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-871101"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-871101"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-871101"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-871101"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-871101"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-871101"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-871101"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-871101"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-871101"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-871101"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-871101"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-871101"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-871101"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-871101"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-871101"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-871101"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-871101"

                                                
                                                
----------------------- debugLogs end: false-871101 [took: 6.59185227s] --------------------------------
helpers_test.go:175: Cleaning up "false-871101" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-871101
--- PASS: TestNetworkPlugins/group/false (6.92s)

TestPause/serial/Start (56.76s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-939868 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-939868 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (56.763972258s)
--- PASS: TestPause/serial/Start (56.76s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-940248 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-940248 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (80.467751ms)

-- stdout --
	* [NoKubernetes-940248] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-6342/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-6342/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
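For readers tracing the exit-status-14 above: `--no-kubernetes` and `--kubernetes-version` are mutually exclusive, and minikube reports usage errors as `MK_USAGE`. A hypothetical Python sketch of that guard (not minikube's actual Go source; function name is illustrative):

```python
# Hypothetical re-creation of the flag guard observed in the log above:
# --no-kubernetes and --kubernetes-version are mutually exclusive, and
# minikube signals usage errors (MK_USAGE) with exit status 14.
def validate_start_flags(no_kubernetes, kubernetes_version):
    if no_kubernetes and kubernetes_version:
        print("X Exiting due to MK_USAGE: cannot specify "
              "--kubernetes-version with --no-kubernetes")
        return 14  # usage-error exit status seen in the log
    return 0

# The failing invocation from the log: --no-kubernetes --kubernetes-version=1.20
print(validate_start_flags(True, "1.20"))
```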
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (28.63s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-940248 --driver=docker  --container-runtime=containerd
E0717 22:06:56.536888   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/functional-773235/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-940248 --driver=docker  --container-runtime=containerd: (28.303478909s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-940248 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (28.63s)

TestNoKubernetes/serial/StartWithStopK8s (26.66s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-940248 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-940248 --no-kubernetes --driver=docker  --container-runtime=containerd: (24.552294925s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-940248 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-940248 status -o json: exit status 2 (260.905106ms)

-- stdout --
	{"Name":"NoKubernetes-940248","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
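The status JSON above can be checked mechanically; a minimal sketch (hypothetical helper name, field names taken verbatim from the log) of the kind of assertion the status check at no_kubernetes_test.go:200 presumably makes on this output:

```python
import json

# Status line copied verbatim from the `minikube status -o json` output above.
raw = ('{"Name":"NoKubernetes-940248","Host":"Running","Kubelet":"Stopped",'
       '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}')

def host_up_kubernetes_stopped(status_json):
    """True when the container host runs but the Kubernetes components are stopped."""
    s = json.loads(status_json)
    return (s["Host"] == "Running"
            and s["Kubelet"] == "Stopped"
            and s["APIServer"] == "Stopped")

print(host_up_kubernetes_stopped(raw))  # → True
```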
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-940248
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-940248: (1.847505743s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (26.66s)

TestPause/serial/SecondStartNoReconfiguration (12.32s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-939868 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-939868 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (12.309600519s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (12.32s)

TestPause/serial/Pause (0.64s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-939868 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.64s)

TestPause/serial/VerifyStatus (0.31s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-939868 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-939868 --output=json --layout=cluster: exit status 2 (306.898513ms)

-- stdout --
	{"Name":"pause-939868","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-939868","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
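The cluster-layout JSON above encodes state as HTTP-style codes (200 OK, 405 Stopped, 418 Paused, as visible in the output itself). A small sketch parsing the fields shown, abridged from that output:

```python
import json

# Abridged from the `minikube status --output=json --layout=cluster` output above;
# the StatusCode/StatusName pairs are copied as they appear in the log.
raw = ('{"Name":"pause-939868","StatusCode":418,"StatusName":"Paused",'
       '"Nodes":[{"Name":"pause-939868","StatusCode":200,"StatusName":"OK",'
       '"Components":{'
       '"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},'
       '"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}')

status = json.loads(raw)
components = status["Nodes"][0]["Components"]
print(status["StatusName"], components["apiserver"]["StatusName"])  # → Paused Paused
```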
--- PASS: TestPause/serial/VerifyStatus (0.31s)

TestPause/serial/Unpause (0.62s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-939868 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.62s)

TestPause/serial/PauseAgain (0.68s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-939868 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.68s)

TestPause/serial/DeletePaused (2.77s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-939868 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-939868 --alsologtostderr -v=5: (2.766118701s)
--- PASS: TestPause/serial/DeletePaused (2.77s)

TestPause/serial/VerifyDeletedResources (0.74s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-939868
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-939868: exit status 1 (15.852087ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-939868: no such volume

** /stderr **
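The inspect result above is the expected shape for a deleted volume: exit status 1, an error on stderr, and an empty JSON array on stdout. A minimal sketch of checking that stdout:

```python
import json

# `docker volume inspect` printed "[]" on stdout for the deleted volume above;
# an empty array means no volume matched the given name.
stdout = "[]"
volumes = json.loads(stdout)
print("volume absent:", volumes == [])  # → volume absent: True
```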
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.74s)

TestNoKubernetes/serial/Start (6.79s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-940248 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-940248 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.787581109s)
--- PASS: TestNoKubernetes/serial/Start (6.79s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-940248 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-940248 "sudo systemctl is-active --quiet service kubelet": exit status 1 (241.976949ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

TestNoKubernetes/serial/ProfileList (0.89s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.89s)

TestNoKubernetes/serial/Stop (1.2s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-940248
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-940248: (1.197028268s)
--- PASS: TestNoKubernetes/serial/Stop (1.20s)

TestNoKubernetes/serial/StartNoArgs (5.96s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-940248 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-940248 --driver=docker  --container-runtime=containerd: (5.958181458s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (5.96s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.25s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-940248 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-940248 "sudo systemctl is-active --quiet service kubelet": exit status 1 (248.802549ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.25s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.68s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-944884
version_upgrade_test.go:218: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-944884: (1.677172262s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.68s)

TestNetworkPlugins/group/auto/Start (49.72s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-871101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-871101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (49.717149597s)
--- PASS: TestNetworkPlugins/group/auto/Start (49.72s)

TestNetworkPlugins/group/kindnet/Start (48.78s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-871101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-871101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (48.782927653s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (48.78s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-871101 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

TestNetworkPlugins/group/auto/NetCatPod (9.33s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-871101 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-nsh8c" [99a5ccdb-ae56-43de-ac79-267c6cd82428] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-nsh8c" [99a5ccdb-ae56-43de-ac79-267c6cd82428] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.006195614s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.33s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-871101 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-871101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-871101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (67.87s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-871101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-871101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m7.86610753s)
--- PASS: TestNetworkPlugins/group/calico/Start (67.87s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-l6plw" [066238fa-04e0-434d-aaa8-1ce2b6955698] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.015106195s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-871101 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.30s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-871101 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-mkf74" [c50b993c-9747-4748-8824-714f9b618719] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0717 22:11:10.735141   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/addons-767732/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-mkf74" [c50b993c-9747-4748-8824-714f9b618719] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.006280808s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.30s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-871101 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-871101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-871101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (47.61s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-871101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-871101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (47.614869915s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (47.61s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (74.37s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-871101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E0717 22:11:56.536264   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/functional-773235/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-871101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m14.370498498s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (74.37s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-v6xtg" [8a4a94ab-935e-49fd-995f-9d3011a845ba] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.019266215s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.35s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-871101 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (8.48s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-871101 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-mcj6w" [055f63cf-be05-47f5-81f8-c3cb87396e13] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-mcj6w" [055f63cf-be05-47f5-81f8-c3cb87396e13] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.006783748s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.48s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-871101 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (8.34s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-871101 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-2d7xl" [5edada18-b57e-4a45-a1df-4f28bb77d1b3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-2d7xl" [5edada18-b57e-4a45-a1df-4f28bb77d1b3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.006342399s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.34s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-871101 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-871101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-871101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-871101 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-871101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-871101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (53.17s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-871101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-871101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (53.172188064s)
--- PASS: TestNetworkPlugins/group/flannel/Start (53.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (37.48s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-871101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-871101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (37.478722343s)
--- PASS: TestNetworkPlugins/group/bridge/Start (37.48s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-871101 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.96s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-871101 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-jjlhq" [c93c35dd-8816-4e2a-a10b-5f37f03411a2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-jjlhq" [c93c35dd-8816-4e2a-a10b-5f37f03411a2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.00650652s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.96s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-871101 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-871101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-871101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (132.96s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-012911 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-012911 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m12.957809553s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (132.96s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-871101 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (8.50s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-871101 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-z69qh" [64f4dca2-d867-43a1-b247-0a82e9d7c661] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-z69qh" [64f4dca2-d867-43a1-b247-0a82e9d7c661] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.007101543s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.50s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-2jc7w" [e8018061-f5b0-4684-a705-0eb7f55f0708] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.013665564s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-871101 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-871101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-871101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-871101 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.34s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-871101 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-8jvf9" [e6a3b1b0-744e-4e12-b5d3-155146c7da2c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-8jvf9" [e6a3b1b0-744e-4e12-b5d3-155146c7da2c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.006889974s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.34s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-871101 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-871101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-871101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)
E0717 22:21:56.537030   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/functional-773235/client.crt: no such file or directory
E0717 22:22:08.028476   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/calico-871101/client.crt: no such file or directory
E0717 22:22:22.152183   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/custom-flannel-871101/client.crt: no such file or directory
TestStartStop/group/no-preload/serial/FirstStart (71.80s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-295620 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-295620 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3: (1m11.803646457s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (71.80s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (79.66s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-929366 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-929366 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3: (1m19.65755464s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (79.66s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.72s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-459036 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-459036 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3: (50.717968782s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.72s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.47s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-295620 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fa27b60b-2a3a-45ea-ac2c-d56d26f61ad2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fa27b60b-2a3a-45ea-ac2c-d56d26f61ad2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.013706334s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-295620 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.47s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.99s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-295620 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-295620 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.83s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-295620 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-295620 --alsologtostderr -v=3: (11.834843973s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.83s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (11.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-929366 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0bfb1007-f186-4050-916c-d469d065ee09] Pending
helpers_test.go:344: "busybox" [0bfb1007-f186-4050-916c-d469d065ee09] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0717 22:15:33.058807   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/client.crt: no such file or directory
E0717 22:15:33.064134   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/client.crt: no such file or directory
E0717 22:15:33.074365   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/client.crt: no such file or directory
E0717 22:15:33.094615   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/client.crt: no such file or directory
E0717 22:15:33.134869   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/client.crt: no such file or directory
helpers_test.go:344: "busybox" [0bfb1007-f186-4050-916c-d469d065ee09] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.025743304s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-929366 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-295620 -n no-preload-295620
E0717 22:15:33.215887   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-295620 -n no-preload-295620: exit status 7 (59.567417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-295620 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)
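Note on the pattern above: `minikube status` exits non-zero for a stopped host (exit status 7 with `Stopped` on stdout in this run), and the harness explicitly accepts it ("status error: exit status 7 (may be ok)") before enabling the dashboard addon. A minimal sketch of that exit-code handling; the `status_ok` helper is illustrative, not a function from the test harness:

```shell
#!/bin/sh
# Sketch of the "exit status 7 (may be ok)" check seen in the log:
# exit code 0 means the host is running, and in this run exit code 7
# accompanies a "Stopped" host; neither should abort the test flow.
status_ok() {
  # Accept 0 (Running) and 7 (Stopped); anything else is a real failure.
  code=$1
  [ "$code" -eq 0 ] || [ "$code" -eq 7 ]
}

if status_ok 7; then
  echo "stopped host: ok to enable addon"
fi
if ! status_ok 1; then
  echo "exit code 1: real failure"
fi
```

In the real test this is followed by `minikube addons enable dashboard -p <profile>`, which works against a stopped cluster because addon configuration is persisted and applied on the next start.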

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (317.73s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-295620 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3
E0717 22:15:33.376179   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/client.crt: no such file or directory
E0717 22:15:33.697033   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/client.crt: no such file or directory
E0717 22:15:34.338025   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/client.crt: no such file or directory
E0717 22:15:35.618546   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-295620 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3: (5m17.442883299s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-295620 -n no-preload-295620
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (317.73s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-012911 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [38183bf1-4d71-4c4b-9679-a53a476a8db4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0717 22:15:38.179642   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/client.crt: no such file or directory
helpers_test.go:344: "busybox" [38183bf1-4d71-4c4b-9679-a53a476a8db4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.01171037s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-012911 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.37s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-459036 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0f23f91b-bb55-4920-b9e8-089df7011389] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0f23f91b-bb55-4920-b9e8-089df7011389] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.013384873s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-459036 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.37s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.90s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-929366 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-929366 describe deploy/metrics-server -n kube-system
E0717 22:15:43.299953   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/client.crt: no such file or directory
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.90s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.88s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-929366 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-929366 --alsologtostderr -v=3: (11.883713371s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.88s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.73s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-012911 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-012911 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.73s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.99s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-012911 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-012911 --alsologtostderr -v=3: (11.990036062s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.99s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.10s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-459036 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-459036 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.020797865s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-459036 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.85s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-459036 --alsologtostderr -v=3
E0717 22:15:53.540996   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-459036 --alsologtostderr -v=3: (11.850970692s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.85s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-929366 -n embed-certs-929366
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-929366 -n embed-certs-929366: exit status 7 (57.203067ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-929366 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (333.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-929366 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-929366 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3: (5m32.583057362s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-929366 -n embed-certs-929366
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (333.05s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-012911 -n old-k8s-version-012911
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-012911 -n old-k8s-version-012911: exit status 7 (84.23178ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-012911 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (668.73s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-012911 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-012911 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (11m8.451170286s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-012911 -n old-k8s-version-012911
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (668.73s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-459036 -n default-k8s-diff-port-459036
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-459036 -n default-k8s-diff-port-459036: exit status 7 (70.710702ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-459036 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (593.95s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-459036 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3
E0717 22:16:03.666977   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/kindnet-871101/client.crt: no such file or directory
E0717 22:16:03.672256   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/kindnet-871101/client.crt: no such file or directory
E0717 22:16:03.683137   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/kindnet-871101/client.crt: no such file or directory
E0717 22:16:03.703430   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/kindnet-871101/client.crt: no such file or directory
E0717 22:16:03.743693   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/kindnet-871101/client.crt: no such file or directory
E0717 22:16:03.824060   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/kindnet-871101/client.crt: no such file or directory
E0717 22:16:03.984770   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/kindnet-871101/client.crt: no such file or directory
E0717 22:16:04.305621   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/kindnet-871101/client.crt: no such file or directory
E0717 22:16:04.946515   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/kindnet-871101/client.crt: no such file or directory
E0717 22:16:06.227091   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/kindnet-871101/client.crt: no such file or directory
E0717 22:16:08.788079   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/kindnet-871101/client.crt: no such file or directory
E0717 22:16:10.735085   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/addons-767732/client.crt: no such file or directory
E0717 22:16:13.908921   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/kindnet-871101/client.crt: no such file or directory
E0717 22:16:14.021187   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/client.crt: no such file or directory
E0717 22:16:24.149623   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/kindnet-871101/client.crt: no such file or directory
E0717 22:16:44.630739   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/kindnet-871101/client.crt: no such file or directory
E0717 22:16:54.981602   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/client.crt: no such file or directory
E0717 22:16:56.536405   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/functional-773235/client.crt: no such file or directory
E0717 22:17:08.027646   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/calico-871101/client.crt: no such file or directory
E0717 22:17:08.032892   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/calico-871101/client.crt: no such file or directory
E0717 22:17:08.043146   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/calico-871101/client.crt: no such file or directory
E0717 22:17:08.063412   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/calico-871101/client.crt: no such file or directory
E0717 22:17:08.103656   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/calico-871101/client.crt: no such file or directory
E0717 22:17:08.183857   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/calico-871101/client.crt: no such file or directory
E0717 22:17:08.344275   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/calico-871101/client.crt: no such file or directory
E0717 22:17:08.664845   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/calico-871101/client.crt: no such file or directory
E0717 22:17:09.305832   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/calico-871101/client.crt: no such file or directory
E0717 22:17:10.586415   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/calico-871101/client.crt: no such file or directory
E0717 22:17:13.146832   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/calico-871101/client.crt: no such file or directory
E0717 22:17:18.267217   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/calico-871101/client.crt: no such file or directory
E0717 22:17:22.152795   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/custom-flannel-871101/client.crt: no such file or directory
E0717 22:17:22.158055   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/custom-flannel-871101/client.crt: no such file or directory
E0717 22:17:22.168292   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/custom-flannel-871101/client.crt: no such file or directory
E0717 22:17:22.188562   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/custom-flannel-871101/client.crt: no such file or directory
E0717 22:17:22.228854   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/custom-flannel-871101/client.crt: no such file or directory
E0717 22:17:22.309136   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/custom-flannel-871101/client.crt: no such file or directory
E0717 22:17:22.469528   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/custom-flannel-871101/client.crt: no such file or directory
E0717 22:17:22.790076   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/custom-flannel-871101/client.crt: no such file or directory
E0717 22:17:23.430949   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/custom-flannel-871101/client.crt: no such file or directory
E0717 22:17:24.712048   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/custom-flannel-871101/client.crt: no such file or directory
E0717 22:17:25.591004   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/kindnet-871101/client.crt: no such file or directory
E0717 22:17:27.272967   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/custom-flannel-871101/client.crt: no such file or directory
E0717 22:17:28.507926   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/calico-871101/client.crt: no such file or directory
E0717 22:17:32.393845   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/custom-flannel-871101/client.crt: no such file or directory
E0717 22:17:42.634142   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/custom-flannel-871101/client.crt: no such file or directory
E0717 22:17:48.988341   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/calico-871101/client.crt: no such file or directory
E0717 22:17:54.960598   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/enable-default-cni-871101/client.crt: no such file or directory
E0717 22:17:54.965840   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/enable-default-cni-871101/client.crt: no such file or directory
E0717 22:17:54.976073   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/enable-default-cni-871101/client.crt: no such file or directory
E0717 22:17:54.996341   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/enable-default-cni-871101/client.crt: no such file or directory
E0717 22:17:55.036620   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/enable-default-cni-871101/client.crt: no such file or directory
E0717 22:17:55.116927   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/enable-default-cni-871101/client.crt: no such file or directory
E0717 22:17:55.277291   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/enable-default-cni-871101/client.crt: no such file or directory
E0717 22:17:55.597842   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/enable-default-cni-871101/client.crt: no such file or directory
E0717 22:17:56.238772   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/enable-default-cni-871101/client.crt: no such file or directory
E0717 22:17:57.519181   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/enable-default-cni-871101/client.crt: no such file or directory
E0717 22:18:00.080081   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/enable-default-cni-871101/client.crt: no such file or directory
E0717 22:18:03.114913   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/custom-flannel-871101/client.crt: no such file or directory
E0717 22:18:05.200822   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/enable-default-cni-871101/client.crt: no such file or directory
E0717 22:18:15.441011   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/enable-default-cni-871101/client.crt: no such file or directory
E0717 22:18:16.902485   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/client.crt: no such file or directory
E0717 22:18:28.798268   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/bridge-871101/client.crt: no such file or directory
E0717 22:18:28.803514   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/bridge-871101/client.crt: no such file or directory
E0717 22:18:28.813745   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/bridge-871101/client.crt: no such file or directory
E0717 22:18:28.833972   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/bridge-871101/client.crt: no such file or directory
E0717 22:18:28.874176   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/bridge-871101/client.crt: no such file or directory
E0717 22:18:28.954491   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/bridge-871101/client.crt: no such file or directory
E0717 22:18:29.114879   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/bridge-871101/client.crt: no such file or directory
E0717 22:18:29.435437   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/bridge-871101/client.crt: no such file or directory
E0717 22:18:29.948653   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/calico-871101/client.crt: no such file or directory
E0717 22:18:30.075908   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/bridge-871101/client.crt: no such file or directory
E0717 22:18:31.356661   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/bridge-871101/client.crt: no such file or directory
E0717 22:18:33.916980   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/bridge-871101/client.crt: no such file or directory
E0717 22:18:34.294510   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/flannel-871101/client.crt: no such file or directory
E0717 22:18:34.299748   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/flannel-871101/client.crt: no such file or directory
E0717 22:18:34.309990   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/flannel-871101/client.crt: no such file or directory
E0717 22:18:34.330217   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/flannel-871101/client.crt: no such file or directory
E0717 22:18:34.370443   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/flannel-871101/client.crt: no such file or directory
E0717 22:18:34.450719   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/flannel-871101/client.crt: no such file or directory
E0717 22:18:34.611049   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/flannel-871101/client.crt: no such file or directory
E0717 22:18:34.931849   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/flannel-871101/client.crt: no such file or directory
E0717 22:18:35.572881   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/flannel-871101/client.crt: no such file or directory
E0717 22:18:35.921277   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/enable-default-cni-871101/client.crt: no such file or directory
E0717 22:18:36.853397   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/flannel-871101/client.crt: no such file or directory
E0717 22:18:39.037893   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/bridge-871101/client.crt: no such file or directory
E0717 22:18:39.413643   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/flannel-871101/client.crt: no such file or directory
E0717 22:18:44.076071   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/custom-flannel-871101/client.crt: no such file or directory
E0717 22:18:44.534141   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/flannel-871101/client.crt: no such file or directory
E0717 22:18:47.511912   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/kindnet-871101/client.crt: no such file or directory
E0717 22:18:49.278215   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/bridge-871101/client.crt: no such file or directory
E0717 22:18:54.775140   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/flannel-871101/client.crt: no such file or directory
E0717 22:19:09.758703   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/bridge-871101/client.crt: no such file or directory
E0717 22:19:13.781973   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/addons-767732/client.crt: no such file or directory
E0717 22:19:15.255432   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/flannel-871101/client.crt: no such file or directory
E0717 22:19:16.881612   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/enable-default-cni-871101/client.crt: no such file or directory
E0717 22:19:31.969032   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/ingress-addon-legacy-021291/client.crt: no such file or directory
E0717 22:19:50.719961   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/bridge-871101/client.crt: no such file or directory
E0717 22:19:51.868826   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/calico-871101/client.crt: no such file or directory
E0717 22:19:56.216057   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/flannel-871101/client.crt: no such file or directory
E0717 22:20:05.996386   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/custom-flannel-871101/client.crt: no such file or directory
E0717 22:20:33.058745   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/client.crt: no such file or directory
E0717 22:20:38.802085   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/enable-default-cni-871101/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-459036 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3: (9m53.666163684s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-459036 -n default-k8s-diff-port-459036
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (593.95s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-9f4nx" [a48e798b-11b0-4020-8b48-8a78b3292b52] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-9f4nx" [a48e798b-11b0-4020-8b48-8a78b3292b52] Running
E0717 22:21:00.743670   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/auto-871101/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.014414498s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-9f4nx" [a48e798b-11b0-4020-8b48-8a78b3292b52] Running
E0717 22:21:03.667693   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/kindnet-871101/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006858973s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-295620 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-295620 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/no-preload/serial/Pause (2.65s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-295620 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-295620 -n no-preload-295620
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-295620 -n no-preload-295620: exit status 2 (288.015227ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-295620 -n no-preload-295620
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-295620 -n no-preload-295620: exit status 2 (276.861331ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-295620 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-295620 -n no-preload-295620
E0717 22:21:10.734780   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/addons-767732/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-295620 -n no-preload-295620
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.65s)

TestStartStop/group/newest-cni/serial/FirstStart (33.57s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-535784 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3
E0717 22:21:18.136203   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/flannel-871101/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-535784 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3: (33.566773682s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (33.57s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-4wrmf" [b37cf571-50ce-4f7a-8b59-bc9d5a869e62] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0717 22:21:31.352146   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/kindnet-871101/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-4wrmf" [b37cf571-50ce-4f7a-8b59-bc9d5a869e62] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.019275396s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-4wrmf" [b37cf571-50ce-4f7a-8b59-bc9d5a869e62] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00711681s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-929366 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-535784 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-535784 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.226989781s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.23s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-929366 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.37s)

TestStartStop/group/embed-certs/serial/Pause (3.28s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-929366 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-929366 -n embed-certs-929366
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-929366 -n embed-certs-929366: exit status 2 (391.723783ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-929366 -n embed-certs-929366
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-929366 -n embed-certs-929366: exit status 2 (359.821778ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-929366 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-929366 -n embed-certs-929366
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-929366 -n embed-certs-929366
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.28s)

TestStartStop/group/newest-cni/serial/Stop (1.22s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-535784 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-535784 --alsologtostderr -v=3: (1.221309154s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.22s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-535784 -n newest-cni-535784
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-535784 -n newest-cni-535784: exit status 7 (81.909444ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-535784 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (40.35s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-535784 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-535784 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3: (40.023669464s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-535784 -n newest-cni-535784
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (40.35s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-535784 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/newest-cni/serial/Pause (2.75s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-535784 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-535784 -n newest-cni-535784
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-535784 -n newest-cni-535784: exit status 2 (304.696993ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-535784 -n newest-cni-535784
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-535784 -n newest-cni-535784: exit status 2 (305.56936ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-535784 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-535784 -n newest-cni-535784
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-535784 -n newest-cni-535784
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.75s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-zbfhs" [ced5e14b-a9b0-4cc7-91cd-684e268b2d66] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014457211s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-zbfhs" [ced5e14b-a9b0-4cc7-91cd-684e268b2d66] Running
E0717 22:26:03.667743   13163 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6342/.minikube/profiles/kindnet-871101/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00609297s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-459036 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-459036 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-459036 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-459036 -n default-k8s-diff-port-459036
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-459036 -n default-k8s-diff-port-459036: exit status 2 (265.357701ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-459036 -n default-k8s-diff-port-459036
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-459036 -n default-k8s-diff-port-459036: exit status 2 (268.894776ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-459036 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-459036 -n default-k8s-diff-port-459036
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-459036 -n default-k8s-diff-port-459036
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.45s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-cf9cm" [840a9a6d-da31-4d0f-b347-1d8b65d5da31] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012240535s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-cf9cm" [840a9a6d-da31-4d0f-b347-1d8b65d5da31] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006041587s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-012911 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-012911 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.37s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-012911 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-012911 -n old-k8s-version-012911
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-012911 -n old-k8s-version-012911: exit status 2 (261.615569ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-012911 -n old-k8s-version-012911
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-012911 -n old-k8s-version-012911: exit status 2 (263.02082ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-012911 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-012911 -n old-k8s-version-012911
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-012911 -n old-k8s-version-012911
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.37s)

                                                
                                    

Test skip (23/304)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.27.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.27.3/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.3/kubectl (0s)
=== RUN   TestDownloadOnly/v1.27.3/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.27.3/kubectl (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.2s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-871101 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-871101

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-871101

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-871101

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-871101

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-871101

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-871101

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-871101

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-871101

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-871101

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-871101

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-871101"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-871101"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-871101"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-871101

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-871101"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-871101"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-871101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-871101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-871101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-871101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-871101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-871101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-871101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-871101" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-871101"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-871101"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-871101"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-871101"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-871101"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-871101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-871101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-871101" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-871101"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-871101"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-871101"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-871101"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-871101"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-871101

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-871101"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-871101"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-871101"

>>> host: docker system info:
* Profile "kubenet-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-871101"

>>> host: cri-docker daemon status:
* Profile "kubenet-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-871101"

>>> host: cri-docker daemon config:
* Profile "kubenet-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-871101"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-871101"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-871101"

>>> host: cri-dockerd version:
* Profile "kubenet-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-871101"

>>> host: containerd daemon status:
* Profile "kubenet-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-871101"

>>> host: containerd daemon config:
* Profile "kubenet-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-871101"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-871101"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-871101"

>>> host: containerd config dump:
* Profile "kubenet-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-871101"

>>> host: crio daemon status:
* Profile "kubenet-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-871101"

>>> host: crio daemon config:
* Profile "kubenet-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-871101"

>>> host: /etc/crio:
* Profile "kubenet-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-871101"

>>> host: crio config:
* Profile "kubenet-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-871101"

----------------------- debugLogs end: kubenet-871101 [took: 3.014226638s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-871101" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-871101
--- SKIP: TestNetworkPlugins/group/kubenet (3.20s)

TestNetworkPlugins/group/cilium (3.27s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-871101 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-871101

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-871101

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-871101

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-871101

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-871101

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-871101

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-871101

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-871101

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-871101

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-871101

>>> host: /etc/nsswitch.conf:
* Profile "cilium-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871101"

>>> host: /etc/hosts:
* Profile "cilium-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871101"

>>> host: /etc/resolv.conf:
* Profile "cilium-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871101"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-871101

>>> host: crictl pods:
* Profile "cilium-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871101"

>>> host: crictl containers:
* Profile "cilium-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871101"

>>> k8s: describe netcat deployment:
error: context "cilium-871101" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-871101" does not exist

>>> k8s: netcat logs:
error: context "cilium-871101" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-871101" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-871101" does not exist

>>> k8s: coredns logs:
error: context "cilium-871101" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-871101" does not exist

>>> k8s: api server logs:
error: context "cilium-871101" does not exist

>>> host: /etc/cni:
* Profile "cilium-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871101"

>>> host: ip a s:
* Profile "cilium-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871101"

>>> host: ip r s:
* Profile "cilium-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871101"

>>> host: iptables-save:
* Profile "cilium-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871101"

>>> host: iptables table nat:
* Profile "cilium-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871101"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-871101

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-871101

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-871101" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-871101" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-871101

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-871101

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-871101" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-871101" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-871101" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-871101" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-871101" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871101"

>>> host: kubelet daemon config:
* Profile "cilium-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871101"

>>> k8s: kubelet logs:
* Profile "cilium-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871101"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871101"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871101"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-871101

>>> host: docker daemon status:
* Profile "cilium-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871101"

>>> host: docker daemon config:
* Profile "cilium-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871101"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871101"

>>> host: docker system info:
* Profile "cilium-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871101"

>>> host: cri-docker daemon status:
* Profile "cilium-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871101"

>>> host: cri-docker daemon config:
* Profile "cilium-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871101"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871101"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871101"

>>> host: cri-dockerd version:
* Profile "cilium-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871101"

>>> host: containerd daemon status:
* Profile "cilium-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871101"

>>> host: containerd daemon config:
* Profile "cilium-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871101"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871101"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871101"

>>> host: containerd config dump:
* Profile "cilium-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871101"

>>> host: crio daemon status:
* Profile "cilium-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871101"

>>> host: crio daemon config:
* Profile "cilium-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871101"

>>> host: /etc/crio:
* Profile "cilium-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871101"

>>> host: crio config:
* Profile "cilium-871101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871101"

----------------------- debugLogs end: cilium-871101 [took: 3.150672233s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-871101" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-871101
--- SKIP: TestNetworkPlugins/group/cilium (3.27s)

TestStartStop/group/disable-driver-mounts (0.13s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-941614" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-941614
--- SKIP: TestStartStop/group/disable-driver-mounts (0.13s)