Test Report: Docker_Linux_containerd 15452

47b5190e7089ef98725bc5305118890b53aa2799:2023-07-10:30078

Failed tests (1/30)

| Order | Failed test             | Duration (s) |
|-------|-------------------------|--------------|
| 42    | TestDockerEnvContainerd | 299.437      |
TestDockerEnvContainerd (299.437s)
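
Root cause (from the log below): the docker-env command ran "sudo chmod 777 /usr/local/bin/nerdctl /usr/local/bin/nerdctld" inside the node container, but neither binary exists there, so minikube exited with MK_START_NERDCTLD (exit status 10). The commands below are a minimal diagnostic sketch, not part of the recorded run; they assume the dockerenv-588031 profile is still up:

	# check whether the nerdctl/nerdctld binaries were ever shipped in the node container
	out/minikube-linux-amd64 ssh -p dockerenv-588031 -- ls -l /usr/local/bin/nerdctl /usr/local/bin/nerdctld

	# re-run the failing step with verbose logging for more context
	out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-588031 --alsologtostderr -v=8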

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-588031 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-588031 --driver=docker  --container-runtime=containerd: (20.657438806s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-588031"
docker_test.go:189: (dbg) Non-zero exit: /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-588031": exit status 10 (250.554304ms)

-- stdout --
	false exit code 10

-- /stdout --
** stderr ** 
	! Using the docker-env command with the containerd runtime is a highly experimental feature, please provide feedback or contribute to make it better
	X Exiting due to MK_START_NERDCTLD: Failed setting permission for nerdctl: 
	** stderr ** 
	chmod: cannot access '/usr/local/bin/nerdctl': No such file or directory
	chmod: cannot access '/usr/local/bin/nerdctld': No such file or directory
	
	** /stderr **: sudo chmod 777 /usr/local/bin/nerdctl /usr/local/bin/nerdctld: Process exited with status 1
	stdout:
	
	stderr:
	chmod: cannot access '/usr/local/bin/nerdctl': No such file or directory
	chmod: cannot access '/usr/local/bin/nerdctld': No such file or directory
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_docker-env_0286061359b7d88e1c575f824495f60db2866fdd_0.log              │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
docker_test.go:191: failed to execute minikube docker-env --ssh-host --ssh-add, error: exit status 10, output: 
-- stdout --
	false exit code 10

-- /stdout --
** stderr ** 
	! Using the docker-env command with the containerd runtime is a highly experimental feature, please provide feedback or contribute to make it better
	X Exiting due to MK_START_NERDCTLD: Failed setting permission for nerdctl: 
	** stderr ** 
	chmod: cannot access '/usr/local/bin/nerdctl': No such file or directory
	chmod: cannot access '/usr/local/bin/nerdctld': No such file or directory
	
	** /stderr **: sudo chmod 777 /usr/local/bin/nerdctl /usr/local/bin/nerdctld: Process exited with status 1
	stdout:
	
	stderr:
	chmod: cannot access '/usr/local/bin/nerdctl': No such file or directory
	chmod: cannot access '/usr/local/bin/nerdctld': No such file or directory
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_docker-env_0286061359b7d88e1c575f824495f60db2866fdd_0.log              │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
docker_test.go:197: DOCKER_HOST doesn't match expected format, output is 
-- stdout --
	false exit code 10

-- /stdout --
** stderr ** 
	! Using the docker-env command with the containerd runtime is a highly experimental feature, please provide feedback or contribute to make it better
	X Exiting due to MK_START_NERDCTLD: Failed setting permission for nerdctl: 
	** stderr ** 
	chmod: cannot access '/usr/local/bin/nerdctl': No such file or directory
	chmod: cannot access '/usr/local/bin/nerdctld': No such file or directory
	
	** /stderr **: sudo chmod 777 /usr/local/bin/nerdctl /usr/local/bin/nerdctld: Process exited with status 1
	stdout:
	
	stderr:
	chmod: cannot access '/usr/local/bin/nerdctl': No such file or directory
	chmod: cannot access '/usr/local/bin/nerdctld': No such file or directory
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_docker-env_0286061359b7d88e1c575f824495f60db2866fdd_0.log              │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:113: *** TestDockerEnvContainerd FAILED at 2023-07-10 23:56:25.750425895 +0000 UTC m=+295.461124222
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerEnvContainerd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect dockerenv-588031
helpers_test.go:235: (dbg) docker inspect dockerenv-588031:

-- stdout --
	[
	    {
	        "Id": "f1ba4511386e8b44f032719a2bad8d90ef0a8f34ea022d8d1941a984f4b01df6",
	        "Created": "2023-07-10T23:56:00.564274855Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 32942,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-10T23:56:00.87029168Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:200657b779b56504580f087762ce29a36bd21d983dabf4b13192d008a5938235",
	        "ResolvConfPath": "/var/lib/docker/containers/f1ba4511386e8b44f032719a2bad8d90ef0a8f34ea022d8d1941a984f4b01df6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f1ba4511386e8b44f032719a2bad8d90ef0a8f34ea022d8d1941a984f4b01df6/hostname",
	        "HostsPath": "/var/lib/docker/containers/f1ba4511386e8b44f032719a2bad8d90ef0a8f34ea022d8d1941a984f4b01df6/hosts",
	        "LogPath": "/var/lib/docker/containers/f1ba4511386e8b44f032719a2bad8d90ef0a8f34ea022d8d1941a984f4b01df6/f1ba4511386e8b44f032719a2bad8d90ef0a8f34ea022d8d1941a984f4b01df6-json.log",
	        "Name": "/dockerenv-588031",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "dockerenv-588031:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "dockerenv-588031",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 8388608000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 16777216000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/24c8cab1705658b5acea6610f859aeaa1b8a257098b0f4456b161e900aa580ff-init/diff:/var/lib/docker/overlay2/05a861e2dff4c37c3d884f7f5158507c0e25578cb5b47b11138baa05acd9229d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/24c8cab1705658b5acea6610f859aeaa1b8a257098b0f4456b161e900aa580ff/merged",
	                "UpperDir": "/var/lib/docker/overlay2/24c8cab1705658b5acea6610f859aeaa1b8a257098b0f4456b161e900aa580ff/diff",
	                "WorkDir": "/var/lib/docker/overlay2/24c8cab1705658b5acea6610f859aeaa1b8a257098b0f4456b161e900aa580ff/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "dockerenv-588031",
	                "Source": "/var/lib/docker/volumes/dockerenv-588031/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "dockerenv-588031",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "dockerenv-588031",
	                "name.minikube.sigs.k8s.io": "dockerenv-588031",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "993347c4d3cb8afeb5cbb9f42ca2cf652b7208af2d3f271333ebca8a1a3e782d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32777"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32776"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32773"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32775"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32774"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/993347c4d3cb",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "dockerenv-588031": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f1ba4511386e",
	                        "dockerenv-588031"
	                    ],
	                    "NetworkID": "c9cf3301f5c3898acb2484959abb661d1ef1de7544422e6f6dddb08dbe589287",
	                    "EndpointID": "a43c6a268d79abeb7a44d1288e108761f86131c0036c1fb8d04d0f062a5e7206",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p dockerenv-588031 -n dockerenv-588031
helpers_test.go:244: <<< TestDockerEnvContainerd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestDockerEnvContainerd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p dockerenv-588031 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p dockerenv-588031 logs -n 25: (1.394553837s)
helpers_test.go:252: TestDockerEnvContainerd logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |------------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	|  Command   |              Args              |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|------------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete     | -p download-docker-631311      | download-docker-631311 | jenkins | v1.30.1 | 10 Jul 23 23:52 UTC | 10 Jul 23 23:52 UTC |
	| start      | --download-only -p             | binary-mirror-045500   | jenkins | v1.30.1 | 10 Jul 23 23:52 UTC |                     |
	|            | binary-mirror-045500           |                        |         |         |                     |                     |
	|            | --alsologtostderr              |                        |         |         |                     |                     |
	|            | --binary-mirror                |                        |         |         |                     |                     |
	|            | http://127.0.0.1:46211         |                        |         |         |                     |                     |
	|            | --driver=docker                |                        |         |         |                     |                     |
	|            | --container-runtime=containerd |                        |         |         |                     |                     |
	| delete     | -p binary-mirror-045500        | binary-mirror-045500   | jenkins | v1.30.1 | 10 Jul 23 23:52 UTC | 10 Jul 23 23:52 UTC |
	| start      | -p addons-435442               | addons-435442          | jenkins | v1.30.1 | 10 Jul 23 23:52 UTC | 10 Jul 23 23:54 UTC |
	|            | --wait=true --memory=4000      |                        |         |         |                     |                     |
	|            | --alsologtostderr              |                        |         |         |                     |                     |
	|            | --addons=registry              |                        |         |         |                     |                     |
	|            | --addons=metrics-server        |                        |         |         |                     |                     |
	|            | --addons=volumesnapshots       |                        |         |         |                     |                     |
	|            | --addons=csi-hostpath-driver   |                        |         |         |                     |                     |
	|            | --addons=gcp-auth              |                        |         |         |                     |                     |
	|            | --addons=cloud-spanner         |                        |         |         |                     |                     |
	|            | --addons=inspektor-gadget      |                        |         |         |                     |                     |
	|            | --driver=docker                |                        |         |         |                     |                     |
	|            | --container-runtime=containerd |                        |         |         |                     |                     |
	|            | --addons=ingress               |                        |         |         |                     |                     |
	|            | --addons=ingress-dns           |                        |         |         |                     |                     |
	|            | --addons=helm-tiller           |                        |         |         |                     |                     |
	| addons     | disable cloud-spanner -p       | addons-435442          | jenkins | v1.30.1 | 10 Jul 23 23:54 UTC | 10 Jul 23 23:54 UTC |
	|            | addons-435442                  |                        |         |         |                     |                     |
	| addons     | addons-435442 addons           | addons-435442          | jenkins | v1.30.1 | 10 Jul 23 23:54 UTC | 10 Jul 23 23:54 UTC |
	|            | disable metrics-server         |                        |         |         |                     |                     |
	|            | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons     | disable inspektor-gadget -p    | addons-435442          | jenkins | v1.30.1 | 10 Jul 23 23:54 UTC | 10 Jul 23 23:55 UTC |
	|            | addons-435442                  |                        |         |         |                     |                     |
	| addons     | addons-435442 addons disable   | addons-435442          | jenkins | v1.30.1 | 10 Jul 23 23:54 UTC | 10 Jul 23 23:54 UTC |
	|            | helm-tiller --alsologtostderr  |                        |         |         |                     |                     |
	|            | -v=1                           |                        |         |         |                     |                     |
	| ip         | addons-435442 ip               | addons-435442          | jenkins | v1.30.1 | 10 Jul 23 23:54 UTC | 10 Jul 23 23:54 UTC |
	| addons     | addons-435442 addons disable   | addons-435442          | jenkins | v1.30.1 | 10 Jul 23 23:54 UTC | 10 Jul 23 23:54 UTC |
	|            | registry --alsologtostderr     |                        |         |         |                     |                     |
	|            | -v=1                           |                        |         |         |                     |                     |
	| addons     | enable headlamp                | addons-435442          | jenkins | v1.30.1 | 10 Jul 23 23:54 UTC | 10 Jul 23 23:54 UTC |
	|            | -p addons-435442               |                        |         |         |                     |                     |
	|            | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ssh        | addons-435442 ssh curl -s      | addons-435442          | jenkins | v1.30.1 | 10 Jul 23 23:54 UTC | 10 Jul 23 23:54 UTC |
	|            | http://127.0.0.1/ -H 'Host:    |                        |         |         |                     |                     |
	|            | nginx.example.com'             |                        |         |         |                     |                     |
	| ip         | addons-435442 ip               | addons-435442          | jenkins | v1.30.1 | 10 Jul 23 23:54 UTC | 10 Jul 23 23:54 UTC |
	| addons     | addons-435442 addons disable   | addons-435442          | jenkins | v1.30.1 | 10 Jul 23 23:54 UTC | 10 Jul 23 23:54 UTC |
	|            | ingress-dns --alsologtostderr  |                        |         |         |                     |                     |
	|            | -v=1                           |                        |         |         |                     |                     |
	| addons     | addons-435442 addons disable   | addons-435442          | jenkins | v1.30.1 | 10 Jul 23 23:54 UTC | 10 Jul 23 23:54 UTC |
	|            | ingress --alsologtostderr -v=1 |                        |         |         |                     |                     |
	| addons     | addons-435442 addons           | addons-435442          | jenkins | v1.30.1 | 10 Jul 23 23:55 UTC | 10 Jul 23 23:55 UTC |
	|            | disable csi-hostpath-driver    |                        |         |         |                     |                     |
	|            | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons     | addons-435442 addons           | addons-435442          | jenkins | v1.30.1 | 10 Jul 23 23:55 UTC | 10 Jul 23 23:55 UTC |
	|            | disable volumesnapshots        |                        |         |         |                     |                     |
	|            | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons     | addons-435442 addons disable   | addons-435442          | jenkins | v1.30.1 | 10 Jul 23 23:55 UTC | 10 Jul 23 23:55 UTC |
	|            | gcp-auth --alsologtostderr     |                        |         |         |                     |                     |
	|            | -v=1                           |                        |         |         |                     |                     |
	| stop       | -p addons-435442               | addons-435442          | jenkins | v1.30.1 | 10 Jul 23 23:55 UTC | 10 Jul 23 23:55 UTC |
	| addons     | enable dashboard -p            | addons-435442          | jenkins | v1.30.1 | 10 Jul 23 23:55 UTC | 10 Jul 23 23:55 UTC |
	|            | addons-435442                  |                        |         |         |                     |                     |
	| addons     | disable dashboard -p           | addons-435442          | jenkins | v1.30.1 | 10 Jul 23 23:55 UTC | 10 Jul 23 23:55 UTC |
	|            | addons-435442                  |                        |         |         |                     |                     |
	| addons     | disable gvisor -p              | addons-435442          | jenkins | v1.30.1 | 10 Jul 23 23:55 UTC | 10 Jul 23 23:55 UTC |
	|            | addons-435442                  |                        |         |         |                     |                     |
	| delete     | -p addons-435442               | addons-435442          | jenkins | v1.30.1 | 10 Jul 23 23:55 UTC | 10 Jul 23 23:55 UTC |
	| start      | -p dockerenv-588031            | dockerenv-588031       | jenkins | v1.30.1 | 10 Jul 23 23:55 UTC | 10 Jul 23 23:56 UTC |
	|            | --driver=docker                |                        |         |         |                     |                     |
	|            | --container-runtime=containerd |                        |         |         |                     |                     |
	| docker-env | --ssh-host --ssh-add -p        | dockerenv-588031       | jenkins | v1.30.1 | 10 Jul 23 23:56 UTC |                     |
	|            | dockerenv-588031               |                        |         |         |                     |                     |
	|------------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/10 23:55:54
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0710 23:55:54.877359   32341 out.go:296] Setting OutFile to fd 1 ...
	I0710 23:55:54.877458   32341 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0710 23:55:54.877461   32341 out.go:309] Setting ErrFile to fd 2...
	I0710 23:55:54.877465   32341 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0710 23:55:54.877587   32341 root.go:337] Updating PATH: /home/jenkins/minikube-integration/15452-8191/.minikube/bin
	I0710 23:55:54.878135   32341 out.go:303] Setting JSON to false
	I0710 23:55:54.879393   32341 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2303,"bootTime":1689031052,"procs":484,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0710 23:55:54.879443   32341 start.go:137] virtualization: kvm guest
	I0710 23:55:54.882067   32341 out.go:177] * [dockerenv-588031] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0710 23:55:54.883597   32341 notify.go:220] Checking for updates...
	I0710 23:55:54.883601   32341 out.go:177]   - MINIKUBE_LOCATION=15452
	I0710 23:55:54.885058   32341 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0710 23:55:54.886503   32341 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15452-8191/kubeconfig
	I0710 23:55:54.887864   32341 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15452-8191/.minikube
	I0710 23:55:54.889288   32341 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0710 23:55:54.890813   32341 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0710 23:55:54.892473   32341 driver.go:373] Setting default libvirt URI to qemu:///system
	I0710 23:55:54.916142   32341 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0710 23:55:54.916228   32341 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0710 23:55:54.968548   32341 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:36 SystemTime:2023-07-10 23:55:54.96037224 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0710 23:55:54.968629   32341 docker.go:294] overlay module found
	I0710 23:55:54.970649   32341 out.go:177] * Using the docker driver based on user configuration
	I0710 23:55:54.971912   32341 start.go:297] selected driver: docker
	I0710 23:55:54.971917   32341 start.go:944] validating driver "docker" against <nil>
	I0710 23:55:54.971925   32341 start.go:955] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0710 23:55:54.971999   32341 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0710 23:55:55.027671   32341 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:36 SystemTime:2023-07-10 23:55:55.019391512 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0710 23:55:55.027795   32341 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0710 23:55:55.028227   32341 start_flags.go:382] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0710 23:55:55.028377   32341 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0710 23:55:55.030120   32341 out.go:177] * Using Docker driver with root privileges
	I0710 23:55:55.031381   32341 cni.go:84] Creating CNI manager for ""
	I0710 23:55:55.031392   32341 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0710 23:55:55.031403   32341 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0710 23:55:55.031414   32341 start_flags.go:319] config:
	{Name:dockerenv-588031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:dockerenv-588031 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0710 23:55:55.033158   32341 out.go:177] * Starting control plane node dockerenv-588031 in cluster dockerenv-588031
	I0710 23:55:55.034632   32341 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0710 23:55:55.035958   32341 out.go:177] * Pulling base image ...
	I0710 23:55:55.037263   32341 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0710 23:55:55.037292   32341 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15452-8191/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4
	I0710 23:55:55.037297   32341 cache.go:57] Caching tarball of preloaded images
	I0710 23:55:55.037354   32341 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5 in local docker daemon
	I0710 23:55:55.037361   32341 preload.go:174] Found /home/jenkins/minikube-integration/15452-8191/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0710 23:55:55.037369   32341 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on containerd
	I0710 23:55:55.037657   32341 profile.go:148] Saving config to /home/jenkins/minikube-integration/15452-8191/.minikube/profiles/dockerenv-588031/config.json ...
	I0710 23:55:55.037676   32341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15452-8191/.minikube/profiles/dockerenv-588031/config.json: {Name:mk7f9fdc2c794c9ea7e1aef9bc979d0d16244ba3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0710 23:55:55.054323   32341 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5 in local docker daemon, skipping pull
	I0710 23:55:55.054333   32341 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5 exists in daemon, skipping load
	I0710 23:55:55.054359   32341 cache.go:195] Successfully downloaded all kic artifacts
	I0710 23:55:55.054407   32341 start.go:365] acquiring machines lock for dockerenv-588031: {Name:mk98e11ab8fce477b1d098bda2cd0bce2062d2e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0710 23:55:55.054520   32341 start.go:369] acquired machines lock for "dockerenv-588031" in 101.584µs
	I0710 23:55:55.054537   32341 start.go:93] Provisioning new machine with config: &{Name:dockerenv-588031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:dockerenv-588031 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0710 23:55:55.054607   32341 start.go:125] createHost starting for "" (driver="docker")
	I0710 23:55:55.056509   32341 out.go:204] * Creating docker container (CPUs=2, Memory=8000MB) ...
	I0710 23:55:55.056691   32341 start.go:159] libmachine.API.Create for "dockerenv-588031" (driver="docker")
	I0710 23:55:55.056707   32341 client.go:168] LocalClient.Create starting
	I0710 23:55:55.056770   32341 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15452-8191/.minikube/certs/ca.pem
	I0710 23:55:55.056795   32341 main.go:141] libmachine: Decoding PEM data...
	I0710 23:55:55.056807   32341 main.go:141] libmachine: Parsing certificate...
	I0710 23:55:55.056855   32341 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15452-8191/.minikube/certs/cert.pem
	I0710 23:55:55.056869   32341 main.go:141] libmachine: Decoding PEM data...
	I0710 23:55:55.056876   32341 main.go:141] libmachine: Parsing certificate...
	I0710 23:55:55.057169   32341 cli_runner.go:164] Run: docker network inspect dockerenv-588031 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0710 23:55:55.072351   32341 cli_runner.go:211] docker network inspect dockerenv-588031 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0710 23:55:55.072402   32341 network_create.go:281] running [docker network inspect dockerenv-588031] to gather additional debugging logs...
	I0710 23:55:55.072415   32341 cli_runner.go:164] Run: docker network inspect dockerenv-588031
	W0710 23:55:55.086931   32341 cli_runner.go:211] docker network inspect dockerenv-588031 returned with exit code 1
	I0710 23:55:55.086946   32341 network_create.go:284] error running [docker network inspect dockerenv-588031]: docker network inspect dockerenv-588031: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network dockerenv-588031 not found
	I0710 23:55:55.086958   32341 network_create.go:286] output of [docker network inspect dockerenv-588031]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network dockerenv-588031 not found
	
	** /stderr **
	I0710 23:55:55.086996   32341 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0710 23:55:55.101769   32341 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0010ca970}
	I0710 23:55:55.101800   32341 network_create.go:123] attempt to create docker network dockerenv-588031 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0710 23:55:55.101836   32341 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=dockerenv-588031 dockerenv-588031
	I0710 23:55:55.151211   32341 network_create.go:107] docker network dockerenv-588031 192.168.49.0/24 created
	I0710 23:55:55.151230   32341 kic.go:117] calculated static IP "192.168.49.2" for the "dockerenv-588031" container
	I0710 23:55:55.151280   32341 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0710 23:55:55.165547   32341 cli_runner.go:164] Run: docker volume create dockerenv-588031 --label name.minikube.sigs.k8s.io=dockerenv-588031 --label created_by.minikube.sigs.k8s.io=true
	I0710 23:55:55.182267   32341 oci.go:103] Successfully created a docker volume dockerenv-588031
	I0710 23:55:55.182320   32341 cli_runner.go:164] Run: docker run --rm --name dockerenv-588031-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-588031 --entrypoint /usr/bin/test -v dockerenv-588031:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5 -d /var/lib
	I0710 23:55:55.705785   32341 oci.go:107] Successfully prepared a docker volume dockerenv-588031
	I0710 23:55:55.705814   32341 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0710 23:55:55.705832   32341 kic.go:190] Starting extracting preloaded images to volume ...
	I0710 23:55:55.705884   32341 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15452-8191/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v dockerenv-588031:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I0710 23:56:00.497383   32341 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15452-8191/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v dockerenv-588031:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5 -I lz4 -xf /preloaded.tar -C /extractDir: (4.79143231s)
	I0710 23:56:00.497403   32341 kic.go:199] duration metric: took 4.791568 seconds to extract preloaded images to volume
	W0710 23:56:00.497700   32341 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0710 23:56:00.497783   32341 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0710 23:56:00.549450   32341 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname dockerenv-588031 --name dockerenv-588031 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-588031 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=dockerenv-588031 --network dockerenv-588031 --ip 192.168.49.2 --volume dockerenv-588031:/var --security-opt apparmor=unconfined --memory=8000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5
	I0710 23:56:00.878519   32341 cli_runner.go:164] Run: docker container inspect dockerenv-588031 --format={{.State.Running}}
	I0710 23:56:00.896781   32341 cli_runner.go:164] Run: docker container inspect dockerenv-588031 --format={{.State.Status}}
	I0710 23:56:00.913820   32341 cli_runner.go:164] Run: docker exec dockerenv-588031 stat /var/lib/dpkg/alternatives/iptables
	I0710 23:56:00.955465   32341 oci.go:144] the created container "dockerenv-588031" has a running status.
	I0710 23:56:00.955486   32341 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/15452-8191/.minikube/machines/dockerenv-588031/id_rsa...
	I0710 23:56:01.156569   32341 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15452-8191/.minikube/machines/dockerenv-588031/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0710 23:56:01.175519   32341 cli_runner.go:164] Run: docker container inspect dockerenv-588031 --format={{.State.Status}}
	I0710 23:56:01.193936   32341 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0710 23:56:01.193947   32341 kic_runner.go:114] Args: [docker exec --privileged dockerenv-588031 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0710 23:56:01.265037   32341 cli_runner.go:164] Run: docker container inspect dockerenv-588031 --format={{.State.Status}}
	I0710 23:56:01.293089   32341 machine.go:88] provisioning docker machine ...
	I0710 23:56:01.293110   32341 ubuntu.go:169] provisioning hostname "dockerenv-588031"
	I0710 23:56:01.293169   32341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-588031
	I0710 23:56:01.309011   32341 main.go:141] libmachine: Using SSH client type: native
	I0710 23:56:01.309437   32341 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32777 <nil> <nil>}
	I0710 23:56:01.309446   32341 main.go:141] libmachine: About to run SSH command:
	sudo hostname dockerenv-588031 && echo "dockerenv-588031" | sudo tee /etc/hostname
	I0710 23:56:01.530855   32341 main.go:141] libmachine: SSH cmd err, output: <nil>: dockerenv-588031
	
	I0710 23:56:01.530920   32341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-588031
	I0710 23:56:01.554648   32341 main.go:141] libmachine: Using SSH client type: native
	I0710 23:56:01.555313   32341 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32777 <nil> <nil>}
	I0710 23:56:01.555336   32341 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdockerenv-588031' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 dockerenv-588031/g' /etc/hosts;
				else 
					echo '127.0.1.1 dockerenv-588031' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0710 23:56:01.690774   32341 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0710 23:56:01.690791   32341 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15452-8191/.minikube CaCertPath:/home/jenkins/minikube-integration/15452-8191/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15452-8191/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15452-8191/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15452-8191/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15452-8191/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15452-8191/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15452-8191/.minikube}
	I0710 23:56:01.690823   32341 ubuntu.go:177] setting up certificates
	I0710 23:56:01.690831   32341 provision.go:83] configureAuth start
	I0710 23:56:01.690892   32341 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-588031
	I0710 23:56:01.706769   32341 provision.go:138] copyHostCerts
	I0710 23:56:01.706826   32341 exec_runner.go:144] found /home/jenkins/minikube-integration/15452-8191/.minikube/ca.pem, removing ...
	I0710 23:56:01.706832   32341 exec_runner.go:203] rm: /home/jenkins/minikube-integration/15452-8191/.minikube/ca.pem
	I0710 23:56:01.706891   32341 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15452-8191/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15452-8191/.minikube/ca.pem (1082 bytes)
	I0710 23:56:01.706966   32341 exec_runner.go:144] found /home/jenkins/minikube-integration/15452-8191/.minikube/cert.pem, removing ...
	I0710 23:56:01.706969   32341 exec_runner.go:203] rm: /home/jenkins/minikube-integration/15452-8191/.minikube/cert.pem
	I0710 23:56:01.706989   32341 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15452-8191/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15452-8191/.minikube/cert.pem (1123 bytes)
	I0710 23:56:01.707037   32341 exec_runner.go:144] found /home/jenkins/minikube-integration/15452-8191/.minikube/key.pem, removing ...
	I0710 23:56:01.707040   32341 exec_runner.go:203] rm: /home/jenkins/minikube-integration/15452-8191/.minikube/key.pem
	I0710 23:56:01.707057   32341 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15452-8191/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15452-8191/.minikube/key.pem (1675 bytes)
	I0710 23:56:01.707093   32341 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15452-8191/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15452-8191/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15452-8191/.minikube/certs/ca-key.pem org=jenkins.dockerenv-588031 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube dockerenv-588031]
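	For context, the server certificate generated on the line above can be reproduced with plain OpenSSL, roughly as follows (a sketch: the org name and SAN list are taken from the log line; key size and validity are assumptions):

	    # Key + CSR for the machine, org matching the log line above
	    openssl req -new -newkey rsa:2048 -nodes \
	      -keyout server-key.pem -out server.csr \
	      -subj "/O=jenkins.dockerenv-588031"
	    # Sign with the minikube CA, attaching the SANs from the log
	    openssl x509 -req -in server.csr \
	      -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	      -out server.pem -days 365 \
	      -extfile <(printf 'subjectAltName=IP:192.168.49.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:dockerenv-588031')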
	I0710 23:56:01.968768   32341 provision.go:172] copyRemoteCerts
	I0710 23:56:01.968814   32341 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0710 23:56:01.968843   32341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-588031
	I0710 23:56:01.984554   32341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32777 SSHKeyPath:/home/jenkins/minikube-integration/15452-8191/.minikube/machines/dockerenv-588031/id_rsa Username:docker}
	I0710 23:56:02.075032   32341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-8191/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0710 23:56:02.094980   32341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-8191/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0710 23:56:02.114457   32341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-8191/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0710 23:56:02.134231   32341 provision.go:86] duration metric: configureAuth took 443.389438ms
	I0710 23:56:02.134251   32341 ubuntu.go:193] setting minikube options for container-runtime
	I0710 23:56:02.134428   32341 config.go:182] Loaded profile config "dockerenv-588031": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0710 23:56:02.134433   32341 machine.go:91] provisioned docker machine in 841.335728ms
	I0710 23:56:02.134438   32341 client.go:171] LocalClient.Create took 7.077727988s
	I0710 23:56:02.134455   32341 start.go:167] duration metric: libmachine.API.Create for "dockerenv-588031" took 7.077765853s
	I0710 23:56:02.134460   32341 start.go:300] post-start starting for "dockerenv-588031" (driver="docker")
	I0710 23:56:02.134467   32341 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0710 23:56:02.134511   32341 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0710 23:56:02.134543   32341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-588031
	I0710 23:56:02.151208   32341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32777 SSHKeyPath:/home/jenkins/minikube-integration/15452-8191/.minikube/machines/dockerenv-588031/id_rsa Username:docker}
	I0710 23:56:02.243270   32341 ssh_runner.go:195] Run: cat /etc/os-release
	I0710 23:56:02.246106   32341 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0710 23:56:02.246136   32341 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0710 23:56:02.246145   32341 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0710 23:56:02.246149   32341 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0710 23:56:02.246156   32341 filesync.go:126] Scanning /home/jenkins/minikube-integration/15452-8191/.minikube/addons for local assets ...
	I0710 23:56:02.246206   32341 filesync.go:126] Scanning /home/jenkins/minikube-integration/15452-8191/.minikube/files for local assets ...
	I0710 23:56:02.246220   32341 start.go:303] post-start completed in 111.756034ms
	I0710 23:56:02.246504   32341 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-588031
	I0710 23:56:02.262116   32341 profile.go:148] Saving config to /home/jenkins/minikube-integration/15452-8191/.minikube/profiles/dockerenv-588031/config.json ...
	I0710 23:56:02.262321   32341 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0710 23:56:02.262351   32341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-588031
	I0710 23:56:02.277579   32341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32777 SSHKeyPath:/home/jenkins/minikube-integration/15452-8191/.minikube/machines/dockerenv-588031/id_rsa Username:docker}
	I0710 23:56:02.371617   32341 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0710 23:56:02.375384   32341 start.go:128] duration metric: createHost completed in 7.320766832s
	I0710 23:56:02.375401   32341 start.go:83] releasing machines lock for "dockerenv-588031", held for 7.320873941s
	I0710 23:56:02.375464   32341 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-588031
	I0710 23:56:02.391644   32341 ssh_runner.go:195] Run: cat /version.json
	I0710 23:56:02.391681   32341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-588031
	I0710 23:56:02.391722   32341 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0710 23:56:02.391766   32341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-588031
	I0710 23:56:02.408395   32341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32777 SSHKeyPath:/home/jenkins/minikube-integration/15452-8191/.minikube/machines/dockerenv-588031/id_rsa Username:docker}
	I0710 23:56:02.408620   32341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32777 SSHKeyPath:/home/jenkins/minikube-integration/15452-8191/.minikube/machines/dockerenv-588031/id_rsa Username:docker}
	I0710 23:56:02.580721   32341 ssh_runner.go:195] Run: systemctl --version
	I0710 23:56:02.584693   32341 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0710 23:56:02.588371   32341 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0710 23:56:02.609438   32341 cni.go:236] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0710 23:56:02.609496   32341 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0710 23:56:02.633645   32341 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0710 23:56:02.633659   32341 start.go:466] detecting cgroup driver to use...
	I0710 23:56:02.633687   32341 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0710 23:56:02.633726   32341 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0710 23:56:02.643930   32341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0710 23:56:02.653287   32341 docker.go:196] disabling cri-docker service (if available) ...
	I0710 23:56:02.653323   32341 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0710 23:56:02.664624   32341 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0710 23:56:02.676376   32341 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0710 23:56:02.747400   32341 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0710 23:56:02.831316   32341 docker.go:212] disabling docker service ...
	I0710 23:56:02.831366   32341 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0710 23:56:02.849285   32341 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0710 23:56:02.859186   32341 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0710 23:56:02.934876   32341 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0710 23:56:03.006821   32341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0710 23:56:03.016519   32341 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0710 23:56:03.029806   32341 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0710 23:56:03.037762   32341 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0710 23:56:03.046162   32341 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0710 23:56:03.046210   32341 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0710 23:56:03.054495   32341 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0710 23:56:03.062459   32341 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0710 23:56:03.070683   32341 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0710 23:56:03.078974   32341 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0710 23:56:03.086642   32341 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
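	The sed edits above pin three settings in /etc/containerd/config.toml that matter for this run; a quick way to confirm them (a sketch, assuming containerd's default config layout):

	    grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	    # SystemdCgroup = false                         <- matches the "cgroupfs" driver detected above
	    # sandbox_image = "registry.k8s.io/pause:3.9"   <- pause image for this Kubernetes version
	    # conf_dir = "/etc/cni/net.d"                   <- where the CNI configs patched earlier live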
	I0710 23:56:03.094839   32341 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0710 23:56:03.101660   32341 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0710 23:56:03.108768   32341 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0710 23:56:03.182719   32341 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0710 23:56:03.250031   32341 start.go:513] Will wait 60s for socket path /run/containerd/containerd.sock
	I0710 23:56:03.250092   32341 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0710 23:56:03.253436   32341 start.go:534] Will wait 60s for crictl version
	I0710 23:56:03.253472   32341 ssh_runner.go:195] Run: which crictl
	I0710 23:56:03.256775   32341 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0710 23:56:03.288637   32341 start.go:550] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.21
	RuntimeApiVersion:  v1
	I0710 23:56:03.288684   32341 ssh_runner.go:195] Run: containerd --version
	I0710 23:56:03.310782   32341 ssh_runner.go:195] Run: containerd --version
	I0710 23:56:03.336795   32341 out.go:177] * Preparing Kubernetes v1.27.3 on containerd 1.6.21 ...
	I0710 23:56:03.338257   32341 cli_runner.go:164] Run: docker network inspect dockerenv-588031 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0710 23:56:03.353963   32341 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0710 23:56:03.357358   32341 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
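	The record is rewritten with grep + cp rather than sed -i because /etc/hosts is a bind mount inside the container: it can be overwritten in place, but the rename that sed -i performs would fail. Annotated, the pattern is (a sketch; IP and hostname are the ones from this run):

	    {
	      grep -v $'\thost.minikube.internal$' /etc/hosts   # drop any stale record
	      printf '192.168.49.1\thost.minikube.internal\n'   # append the current one
	    } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts                        # write in place, keeping the mount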
	I0710 23:56:03.366909   32341 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0710 23:56:03.366950   32341 ssh_runner.go:195] Run: sudo crictl images --output json
	I0710 23:56:03.396144   32341 containerd.go:604] all images are preloaded for containerd runtime.
	I0710 23:56:03.396154   32341 containerd.go:518] Images already preloaded, skipping extraction
	I0710 23:56:03.396194   32341 ssh_runner.go:195] Run: sudo crictl images --output json
	I0710 23:56:03.425190   32341 containerd.go:604] all images are preloaded for containerd runtime.
	I0710 23:56:03.425200   32341 cache_images.go:84] Images are preloaded, skipping loading
	I0710 23:56:03.425239   32341 ssh_runner.go:195] Run: sudo crictl info
	I0710 23:56:03.455698   32341 cni.go:84] Creating CNI manager for ""
	I0710 23:56:03.455710   32341 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0710 23:56:03.455720   32341 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0710 23:56:03.455735   32341 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:dockerenv-588031 NodeName:dockerenv-588031 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0710 23:56:03.455841   32341 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "dockerenv-588031"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
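	A config like the one above can be sanity-checked before it is handed to kubeadm init, without changing the node (a sketch; the path is where this file is copied later in the log):

	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run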
	
	I0710 23:56:03.455896   32341 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=dockerenv-588031 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:dockerenv-588031 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0710 23:56:03.455938   32341 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0710 23:56:03.464006   32341 binaries.go:44] Found k8s binaries, skipping transfer
	I0710 23:56:03.464057   32341 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0710 23:56:03.471388   32341 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0710 23:56:03.486109   32341 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0710 23:56:03.501039   32341 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0710 23:56:03.515314   32341 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0710 23:56:03.518125   32341 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0710 23:56:03.526852   32341 certs.go:56] Setting up /home/jenkins/minikube-integration/15452-8191/.minikube/profiles/dockerenv-588031 for IP: 192.168.49.2
	I0710 23:56:03.526866   32341 certs.go:190] acquiring lock for shared ca certs: {Name:mk33250f2a673af32b971ef356a8a0a8e5429398 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0710 23:56:03.526980   32341 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15452-8191/.minikube/ca.key
	I0710 23:56:03.527012   32341 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15452-8191/.minikube/proxy-client-ca.key
	I0710 23:56:03.527044   32341 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/15452-8191/.minikube/profiles/dockerenv-588031/client.key
	I0710 23:56:03.527058   32341 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15452-8191/.minikube/profiles/dockerenv-588031/client.crt with IP's: []
	I0710 23:56:03.657099   32341 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15452-8191/.minikube/profiles/dockerenv-588031/client.crt ...
	I0710 23:56:03.657112   32341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15452-8191/.minikube/profiles/dockerenv-588031/client.crt: {Name:mke9fd9dd75e476ea66e1bd9a68d509aeba9f67a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0710 23:56:03.657273   32341 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15452-8191/.minikube/profiles/dockerenv-588031/client.key ...
	I0710 23:56:03.657278   32341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15452-8191/.minikube/profiles/dockerenv-588031/client.key: {Name:mk439274429fc389a566693f2cca802d71c3b6a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0710 23:56:03.657344   32341 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/15452-8191/.minikube/profiles/dockerenv-588031/apiserver.key.dd3b5fb2
	I0710 23:56:03.657352   32341 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15452-8191/.minikube/profiles/dockerenv-588031/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0710 23:56:03.839308   32341 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15452-8191/.minikube/profiles/dockerenv-588031/apiserver.crt.dd3b5fb2 ...
	I0710 23:56:03.839324   32341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15452-8191/.minikube/profiles/dockerenv-588031/apiserver.crt.dd3b5fb2: {Name:mk62b302437918a27890c5432df52ad9ec1710f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0710 23:56:03.839497   32341 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15452-8191/.minikube/profiles/dockerenv-588031/apiserver.key.dd3b5fb2 ...
	I0710 23:56:03.839502   32341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15452-8191/.minikube/profiles/dockerenv-588031/apiserver.key.dd3b5fb2: {Name:mk226aa88f2d3493ad8b44c5ca6a2d605cb4118d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0710 23:56:03.839569   32341 certs.go:337] copying /home/jenkins/minikube-integration/15452-8191/.minikube/profiles/dockerenv-588031/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/15452-8191/.minikube/profiles/dockerenv-588031/apiserver.crt
	I0710 23:56:03.839625   32341 certs.go:341] copying /home/jenkins/minikube-integration/15452-8191/.minikube/profiles/dockerenv-588031/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/15452-8191/.minikube/profiles/dockerenv-588031/apiserver.key
	I0710 23:56:03.839665   32341 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/15452-8191/.minikube/profiles/dockerenv-588031/proxy-client.key
	I0710 23:56:03.839674   32341 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15452-8191/.minikube/profiles/dockerenv-588031/proxy-client.crt with IP's: []
	I0710 23:56:03.970824   32341 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15452-8191/.minikube/profiles/dockerenv-588031/proxy-client.crt ...
	I0710 23:56:03.970838   32341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15452-8191/.minikube/profiles/dockerenv-588031/proxy-client.crt: {Name:mk9b367c8a43f3acbba67903aa248b5a0a832d66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0710 23:56:03.971008   32341 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15452-8191/.minikube/profiles/dockerenv-588031/proxy-client.key ...
	I0710 23:56:03.971014   32341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15452-8191/.minikube/profiles/dockerenv-588031/proxy-client.key: {Name:mkb81975e3280fac91c847d889d071bba4b1d88c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0710 23:56:03.971190   32341 certs.go:437] found cert: /home/jenkins/minikube-integration/15452-8191/.minikube/certs/home/jenkins/minikube-integration/15452-8191/.minikube/certs/ca-key.pem (1679 bytes)
	I0710 23:56:03.971219   32341 certs.go:437] found cert: /home/jenkins/minikube-integration/15452-8191/.minikube/certs/home/jenkins/minikube-integration/15452-8191/.minikube/certs/ca.pem (1082 bytes)
	I0710 23:56:03.971238   32341 certs.go:437] found cert: /home/jenkins/minikube-integration/15452-8191/.minikube/certs/home/jenkins/minikube-integration/15452-8191/.minikube/certs/cert.pem (1123 bytes)
	I0710 23:56:03.971255   32341 certs.go:437] found cert: /home/jenkins/minikube-integration/15452-8191/.minikube/certs/home/jenkins/minikube-integration/15452-8191/.minikube/certs/key.pem (1675 bytes)
	I0710 23:56:03.971748   32341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-8191/.minikube/profiles/dockerenv-588031/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0710 23:56:03.992599   32341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-8191/.minikube/profiles/dockerenv-588031/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0710 23:56:04.012804   32341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-8191/.minikube/profiles/dockerenv-588031/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0710 23:56:04.032471   32341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-8191/.minikube/profiles/dockerenv-588031/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0710 23:56:04.051593   32341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-8191/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0710 23:56:04.071435   32341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-8191/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0710 23:56:04.091283   32341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-8191/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0710 23:56:04.110707   32341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-8191/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0710 23:56:04.130537   32341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15452-8191/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0710 23:56:04.150321   32341 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0710 23:56:04.165566   32341 ssh_runner.go:195] Run: openssl version
	I0710 23:56:04.170315   32341 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0710 23:56:04.178221   32341 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0710 23:56:04.181197   32341 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 10 23:52 /usr/share/ca-certificates/minikubeCA.pem
	I0710 23:56:04.181235   32341 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0710 23:56:04.186994   32341 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
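	The link name b5213941.0 is not arbitrary: OpenSSL looks up CA certificates in /etc/ssl/certs by the hash of the certificate subject, so the symlink must be named after the output of the -hash invocation above, with a .0 suffix for the first collision slot. Equivalently (a sketch):

	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h is b5213941 here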
	I0710 23:56:04.194647   32341 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0710 23:56:04.197325   32341 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0710 23:56:04.197362   32341 kubeadm.go:404] StartCluster: {Name:dockerenv-588031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:dockerenv-588031 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0710 23:56:04.197451   32341 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0710 23:56:04.197482   32341 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0710 23:56:04.228445   32341 cri.go:89] found id: ""
	I0710 23:56:04.228500   32341 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0710 23:56:04.236115   32341 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0710 23:56:04.243374   32341 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0710 23:56:04.243408   32341 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0710 23:56:04.250450   32341 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0710 23:56:04.250477   32341 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0710 23:56:04.294026   32341 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0710 23:56:04.294209   32341 kubeadm.go:322] [preflight] Running pre-flight checks
	I0710 23:56:04.328493   32341 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0710 23:56:04.328581   32341 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1037-gcp
	I0710 23:56:04.328617   32341 kubeadm.go:322] OS: Linux
	I0710 23:56:04.328684   32341 kubeadm.go:322] CGROUPS_CPU: enabled
	I0710 23:56:04.328748   32341 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0710 23:56:04.328785   32341 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0710 23:56:04.328821   32341 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0710 23:56:04.328858   32341 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0710 23:56:04.328903   32341 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0710 23:56:04.328938   32341 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0710 23:56:04.328975   32341 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0710 23:56:04.329027   32341 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0710 23:56:04.390408   32341 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0710 23:56:04.390515   32341 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0710 23:56:04.390645   32341 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0710 23:56:04.575402   32341 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0710 23:56:04.578103   32341 out.go:204]   - Generating certificates and keys ...
	I0710 23:56:04.578242   32341 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0710 23:56:04.578335   32341 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0710 23:56:04.843708   32341 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0710 23:56:05.045868   32341 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0710 23:56:05.105341   32341 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0710 23:56:05.318898   32341 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0710 23:56:05.435698   32341 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0710 23:56:05.435815   32341 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [dockerenv-588031 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0710 23:56:05.596407   32341 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0710 23:56:05.596517   32341 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [dockerenv-588031 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0710 23:56:05.722586   32341 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0710 23:56:05.850001   32341 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0710 23:56:05.966890   32341 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0710 23:56:05.966975   32341 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0710 23:56:06.229785   32341 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0710 23:56:06.367984   32341 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0710 23:56:07.019914   32341 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0710 23:56:07.111725   32341 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0710 23:56:07.122630   32341 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0710 23:56:07.123308   32341 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0710 23:56:07.123363   32341 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0710 23:56:07.194376   32341 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0710 23:56:07.196399   32341 out.go:204]   - Booting up control plane ...
	I0710 23:56:07.196521   32341 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0710 23:56:07.197603   32341 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0710 23:56:07.199152   32341 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0710 23:56:07.200068   32341 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0710 23:56:07.202354   32341 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0710 23:56:12.204558   32341 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002170 seconds
	I0710 23:56:12.204704   32341 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0710 23:56:12.216955   32341 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0710 23:56:12.734847   32341 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0710 23:56:12.735014   32341 kubeadm.go:322] [mark-control-plane] Marking the node dockerenv-588031 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0710 23:56:13.242949   32341 kubeadm.go:322] [bootstrap-token] Using token: 9ydibs.2cjuhgwotsmf65bw
	I0710 23:56:13.244651   32341 out.go:204]   - Configuring RBAC rules ...
	I0710 23:56:13.244797   32341 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0710 23:56:13.249211   32341 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0710 23:56:13.255472   32341 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0710 23:56:13.258821   32341 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0710 23:56:13.261359   32341 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0710 23:56:13.264019   32341 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0710 23:56:13.276330   32341 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0710 23:56:13.482355   32341 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0710 23:56:13.652943   32341 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0710 23:56:13.655005   32341 kubeadm.go:322] 
	I0710 23:56:13.655092   32341 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0710 23:56:13.655108   32341 kubeadm.go:322] 
	I0710 23:56:13.655231   32341 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0710 23:56:13.655240   32341 kubeadm.go:322] 
	I0710 23:56:13.655260   32341 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0710 23:56:13.655311   32341 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0710 23:56:13.655351   32341 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0710 23:56:13.655354   32341 kubeadm.go:322] 
	I0710 23:56:13.655396   32341 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0710 23:56:13.655399   32341 kubeadm.go:322] 
	I0710 23:56:13.655440   32341 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0710 23:56:13.655442   32341 kubeadm.go:322] 
	I0710 23:56:13.655488   32341 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0710 23:56:13.655578   32341 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0710 23:56:13.655664   32341 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0710 23:56:13.655668   32341 kubeadm.go:322] 
	I0710 23:56:13.655776   32341 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0710 23:56:13.655867   32341 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0710 23:56:13.655871   32341 kubeadm.go:322] 
	I0710 23:56:13.655976   32341 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 9ydibs.2cjuhgwotsmf65bw \
	I0710 23:56:13.656147   32341 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0707b13839b5ea8b9287f53c9a21855600382727f3185c5027a0f01eee940b3c \
	I0710 23:56:13.656172   32341 kubeadm.go:322] 	--control-plane 
	I0710 23:56:13.656177   32341 kubeadm.go:322] 
	I0710 23:56:13.656272   32341 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0710 23:56:13.656276   32341 kubeadm.go:322] 
	I0710 23:56:13.656398   32341 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 9ydibs.2cjuhgwotsmf65bw \
	I0710 23:56:13.656546   32341 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0707b13839b5ea8b9287f53c9a21855600382727f3185c5027a0f01eee940b3c 
	I0710 23:56:13.658480   32341 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1037-gcp\n", err: exit status 1
	I0710 23:56:13.658638   32341 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0710 23:56:13.658654   32341 cni.go:84] Creating CNI manager for ""
	I0710 23:56:13.658666   32341 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0710 23:56:13.660631   32341 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0710 23:56:13.662086   32341 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0710 23:56:13.665566   32341 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0710 23:56:13.665580   32341 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0710 23:56:13.724229   32341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0710 23:56:14.477259   32341 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0710 23:56:14.477361   32341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0710 23:56:14.477397   32341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=330d5dd0bc9f186aa250423fa2021af06dc8c810 minikube.k8s.io/name=dockerenv-588031 minikube.k8s.io/updated_at=2023_07_10T23_56_14_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0710 23:56:14.547408   32341 kubeadm.go:1081] duration metric: took 70.088135ms to wait for elevateKubeSystemPrivileges.
	I0710 23:56:14.547469   32341 ops.go:34] apiserver oom_adj: -16
	I0710 23:56:14.555122   32341 kubeadm.go:406] StartCluster complete in 10.357731657s
	I0710 23:56:14.555149   32341 settings.go:142] acquiring lock: {Name:mk206ba0d951a3e3b9fc67cc66af54b30d7d65f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0710 23:56:14.555212   32341 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15452-8191/kubeconfig
	I0710 23:56:14.555767   32341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15452-8191/kubeconfig: {Name:mk28ec345b13972979535f031c52d3bd3c02ef54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0710 23:56:14.557198   32341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0710 23:56:14.557379   32341 config.go:182] Loaded profile config "dockerenv-588031": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0710 23:56:14.557402   32341 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0710 23:56:14.557476   32341 addons.go:66] Setting storage-provisioner=true in profile "dockerenv-588031"
	I0710 23:56:14.557490   32341 addons.go:228] Setting addon storage-provisioner=true in "dockerenv-588031"
	I0710 23:56:14.557536   32341 host.go:66] Checking if "dockerenv-588031" exists ...
	I0710 23:56:14.557540   32341 addons.go:66] Setting default-storageclass=true in profile "dockerenv-588031"
	I0710 23:56:14.557559   32341 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "dockerenv-588031"
	I0710 23:56:14.557850   32341 cli_runner.go:164] Run: docker container inspect dockerenv-588031 --format={{.State.Status}}
	I0710 23:56:14.558011   32341 cli_runner.go:164] Run: docker container inspect dockerenv-588031 --format={{.State.Status}}
	I0710 23:56:14.580291   32341 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0710 23:56:14.581882   32341 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0710 23:56:14.581891   32341 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0710 23:56:14.581931   32341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-588031
	I0710 23:56:14.597242   32341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32777 SSHKeyPath:/home/jenkins/minikube-integration/15452-8191/.minikube/machines/dockerenv-588031/id_rsa Username:docker}
	I0710 23:56:14.618125   32341 addons.go:228] Setting addon default-storageclass=true in "dockerenv-588031"
	I0710 23:56:14.618165   32341 host.go:66] Checking if "dockerenv-588031" exists ...
	I0710 23:56:14.618633   32341 cli_runner.go:164] Run: docker container inspect dockerenv-588031 --format={{.State.Status}}
	I0710 23:56:14.638026   32341 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0710 23:56:14.638035   32341 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0710 23:56:14.638076   32341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-588031
	I0710 23:56:14.664754   32341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32777 SSHKeyPath:/home/jenkins/minikube-integration/15452-8191/.minikube/machines/dockerenv-588031/id_rsa Username:docker}
	I0710 23:56:14.719969   32341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
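	Recoverable from the sed expressions above, the pipeline injects a hosts block ahead of CoreDNS's forward plugin (plus a log directive before errors), so in-cluster lookups of host.minikube.internal resolve to the gateway:

	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }

	    # To confirm after the replace (a sketch):
	    kubectl -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'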
	I0710 23:56:14.731976   32341 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0710 23:56:14.832633   32341 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0710 23:56:15.120939   32341 kapi.go:248] "coredns" deployment in "kube-system" namespace and "dockerenv-588031" context rescaled to 1 replicas
	I0710 23:56:15.120964   32341 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0710 23:56:15.123127   32341 out.go:177] * Verifying Kubernetes components...
	I0710 23:56:15.124509   32341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0710 23:56:15.238990   32341 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0710 23:56:15.420803   32341 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0710 23:56:15.422291   32341 addons.go:499] enable addons completed in 864.88109ms: enabled=[storage-provisioner default-storageclass]
	I0710 23:56:15.420013   32341 api_server.go:52] waiting for apiserver process to appear ...
	I0710 23:56:15.422365   32341 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0710 23:56:15.431926   32341 api_server.go:72] duration metric: took 310.939004ms to wait for apiserver process to appear ...
	I0710 23:56:15.431936   32341 api_server.go:88] waiting for apiserver healthz status ...
	I0710 23:56:15.431952   32341 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0710 23:56:15.436727   32341 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0710 23:56:15.437979   32341 api_server.go:141] control plane version: v1.27.3
	I0710 23:56:15.437990   32341 api_server.go:131] duration metric: took 6.051369ms to wait for apiserver health ...
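	The healthz wait boils down to polling the endpoint until it answers ok; a minimal equivalent (a sketch: the endpoint is from the log, the 60s bound is an assumption, and -k is needed because the apiserver certificate is not in the host trust store):

	    for _ in $(seq 1 60); do                     # bound the wait at roughly 60s
	      curl -sk https://192.168.49.2:8443/healthz | grep -qx ok && break
	      sleep 1
	    done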
	I0710 23:56:15.437997   32341 system_pods.go:43] waiting for kube-system pods to appear ...
	I0710 23:56:15.443992   32341 system_pods.go:59] 5 kube-system pods found
	I0710 23:56:15.444011   32341 system_pods.go:61] "etcd-dockerenv-588031" [41834a21-fa5a-4825-be37-9fd0cc084470] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0710 23:56:15.444024   32341 system_pods.go:61] "kube-apiserver-dockerenv-588031" [ce0c5cfd-0a3e-4e89-a2b5-7f734b05dccf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0710 23:56:15.444033   32341 system_pods.go:61] "kube-controller-manager-dockerenv-588031" [1047ecf3-7118-4c96-a33a-7e8ae6383fd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0710 23:56:15.444043   32341 system_pods.go:61] "kube-scheduler-dockerenv-588031" [a38c15fe-7651-487c-88dc-66fa8ca623c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0710 23:56:15.444050   32341 system_pods.go:61] "storage-provisioner" [e913fe82-6cb6-4e1c-8e25-acef3e0e0697] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0710 23:56:15.444055   32341 system_pods.go:74] duration metric: took 6.054845ms to wait for pod list to return data ...
	I0710 23:56:15.444064   32341 kubeadm.go:581] duration metric: took 323.079539ms to wait for : map[apiserver:true system_pods:true] ...
	I0710 23:56:15.444076   32341 node_conditions.go:102] verifying NodePressure condition ...
	I0710 23:56:15.446507   32341 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0710 23:56:15.446521   32341 node_conditions.go:123] node cpu capacity is 8
	I0710 23:56:15.446533   32341 node_conditions.go:105] duration metric: took 2.452335ms to run NodePressure ...
	I0710 23:56:15.446544   32341 start.go:228] waiting for startup goroutines ...
	I0710 23:56:15.446552   32341 start.go:233] waiting for cluster config update ...
	I0710 23:56:15.446566   32341 start.go:242] writing updated cluster config ...
	I0710 23:56:15.446894   32341 ssh_runner.go:195] Run: rm -f paused
	I0710 23:56:15.491269   32341 start.go:642] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0710 23:56:15.493280   32341 out.go:177] * Done! kubectl is now configured to use "dockerenv-588031" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	7dffca574c701       b0b1fa0f58c6e       Less than a second ago   Created             kindnet-cni               0                   a605afb83f46f       kindnet-n99q7
	abb8465e4b086       6e38f40d628db       Less than a second ago   Running             storage-provisioner       0                   c29a2999ea705       storage-provisioner
	602f053dab2cb       5780543258cf0       Less than a second ago   Running             kube-proxy                0                   ff82a19cb3867       kube-proxy-z7ffl
	840b2276764bc       08a0c939e61b7       18 seconds ago           Running             kube-apiserver            0                   03ee0e8d1443b       kube-apiserver-dockerenv-588031
	8ca62eae3e9cf       7cffc01dba0e1       18 seconds ago           Running             kube-controller-manager   0                   c11c5029a9367       kube-controller-manager-dockerenv-588031
	2ca7f35b20770       86b6af7dd652c       18 seconds ago           Running             etcd                      0                   5285ec55a572e       etcd-dockerenv-588031
	3ceec7c326e50       41697ceeb70b3       18 seconds ago           Running             kube-scheduler            0                   8b142ec5e4953       kube-scheduler-dockerenv-588031
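	The same listing can be reproduced on the node with crictl, which this run already uses for image checks (a sketch):

	    sudo crictl ps -a   # all containers, including the freshly Created kindnet-cni above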
	
	* 
	* ==> containerd <==
	* Jul 10 23:56:26 dockerenv-588031 containerd[777]: time="2023-07-10T23:56:26.350881599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 10 23:56:26 dockerenv-588031 containerd[777]: time="2023-07-10T23:56:26.350907947Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a605afb83f46f71b0808e7fb55dafafd9fc5e3fcf0d0719862ec07ba0e4d3739 pid=1699 runtime=io.containerd.runc.v2
	Jul 10 23:56:26 dockerenv-588031 containerd[777]: time="2023-07-10T23:56:26.351210695Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ff82a19cb3867043d47406f94e23092e2d1a22dd3f16d1391bfecc91d57833e1 pid=1698 runtime=io.containerd.runc.v2
	Jul 10 23:56:26 dockerenv-588031 containerd[777]: time="2023-07-10T23:56:26.516512029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:e913fe82-6cb6-4e1c-8e25-acef3e0e0697,Namespace:kube-system,Attempt:0,}"
	Jul 10 23:56:26 dockerenv-588031 containerd[777]: time="2023-07-10T23:56:26.532498961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z7ffl,Uid:ed23f511-eec5-409c-a44a-b7e8847f1935,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff82a19cb3867043d47406f94e23092e2d1a22dd3f16d1391bfecc91d57833e1\""
	Jul 10 23:56:26 dockerenv-588031 containerd[777]: time="2023-07-10T23:56:26.535207375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 10 23:56:26 dockerenv-588031 containerd[777]: time="2023-07-10T23:56:26.535265282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 10 23:56:26 dockerenv-588031 containerd[777]: time="2023-07-10T23:56:26.535275208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 10 23:56:26 dockerenv-588031 containerd[777]: time="2023-07-10T23:56:26.535471317Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c29a2999ea705f326ae255aa0227ba66f9d9e81a8afc8768c00b53ec176efa03 pid=1798 runtime=io.containerd.runc.v2
	Jul 10 23:56:26 dockerenv-588031 containerd[777]: time="2023-07-10T23:56:26.535902412Z" level=info msg="CreateContainer within sandbox \"ff82a19cb3867043d47406f94e23092e2d1a22dd3f16d1391bfecc91d57833e1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Jul 10 23:56:26 dockerenv-588031 containerd[777]: time="2023-07-10T23:56:26.550557786Z" level=info msg="CreateContainer within sandbox \"ff82a19cb3867043d47406f94e23092e2d1a22dd3f16d1391bfecc91d57833e1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"602f053dab2cb6b4200bb1b113cf41822fcdcfe50465d65242d729cf3f2a2fd5\""
	Jul 10 23:56:26 dockerenv-588031 containerd[777]: time="2023-07-10T23:56:26.551261886Z" level=info msg="StartContainer for \"602f053dab2cb6b4200bb1b113cf41822fcdcfe50465d65242d729cf3f2a2fd5\""
	Jul 10 23:56:26 dockerenv-588031 containerd[777]: time="2023-07-10T23:56:26.654978578Z" level=info msg="StartContainer for \"602f053dab2cb6b4200bb1b113cf41822fcdcfe50465d65242d729cf3f2a2fd5\" returns successfully"
	Jul 10 23:56:26 dockerenv-588031 containerd[777]: time="2023-07-10T23:56:26.656583053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:e913fe82-6cb6-4e1c-8e25-acef3e0e0697,Namespace:kube-system,Attempt:0,} returns sandbox id \"c29a2999ea705f326ae255aa0227ba66f9d9e81a8afc8768c00b53ec176efa03\""
	Jul 10 23:56:26 dockerenv-588031 containerd[777]: time="2023-07-10T23:56:26.659526694Z" level=info msg="CreateContainer within sandbox \"c29a2999ea705f326ae255aa0227ba66f9d9e81a8afc8768c00b53ec176efa03\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Jul 10 23:56:26 dockerenv-588031 containerd[777]: time="2023-07-10T23:56:26.726609590Z" level=info msg="CreateContainer within sandbox \"c29a2999ea705f326ae255aa0227ba66f9d9e81a8afc8768c00b53ec176efa03\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"abb8465e4b08600c1b31faf82d206b7213558b74c3d0ab6245492018c0e7362b\""
	Jul 10 23:56:26 dockerenv-588031 containerd[777]: time="2023-07-10T23:56:26.727582149Z" level=info msg="StartContainer for \"abb8465e4b08600c1b31faf82d206b7213558b74c3d0ab6245492018c0e7362b\""
	Jul 10 23:56:26 dockerenv-588031 containerd[777]: time="2023-07-10T23:56:26.742123332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-n99q7,Uid:05bcb275-cf98-4d9b-9690-660fc711c855,Namespace:kube-system,Attempt:0,} returns sandbox id \"a605afb83f46f71b0808e7fb55dafafd9fc5e3fcf0d0719862ec07ba0e4d3739\""
	Jul 10 23:56:26 dockerenv-588031 containerd[777]: time="2023-07-10T23:56:26.745762200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-95wks,Uid:ee7083f4-21be-4591-9a9b-2bc2bceb3262,Namespace:kube-system,Attempt:0,}"
	Jul 10 23:56:26 dockerenv-588031 containerd[777]: time="2023-07-10T23:56:26.751209381Z" level=info msg="CreateContainer within sandbox \"a605afb83f46f71b0808e7fb55dafafd9fc5e3fcf0d0719862ec07ba0e4d3739\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Jul 10 23:56:26 dockerenv-588031 containerd[777]: time="2023-07-10T23:56:26.825793524Z" level=info msg="CreateContainer within sandbox \"a605afb83f46f71b0808e7fb55dafafd9fc5e3fcf0d0719862ec07ba0e4d3739\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"7dffca574c701cd01bfb8362e46d22c2d229997c48eeb6c4672e1b8a64196eda\""
	Jul 10 23:56:26 dockerenv-588031 containerd[777]: time="2023-07-10T23:56:26.826696479Z" level=info msg="StartContainer for \"7dffca574c701cd01bfb8362e46d22c2d229997c48eeb6c4672e1b8a64196eda\""
	Jul 10 23:56:26 dockerenv-588031 containerd[777]: time="2023-07-10T23:56:26.838301877Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-95wks,Uid:ee7083f4-21be-4591-9a9b-2bc2bceb3262,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"966461c5a72cde7f6e733986caa922b9749efb3c6110e9baed938169d783559a\": failed to find network info for sandbox \"966461c5a72cde7f6e733986caa922b9749efb3c6110e9baed938169d783559a\""
	Jul 10 23:56:26 dockerenv-588031 containerd[777]: time="2023-07-10T23:56:26.848516401Z" level=info msg="StartContainer for \"abb8465e4b08600c1b31faf82d206b7213558b74c3d0ab6245492018c0e7362b\" returns successfully"
	Jul 10 23:56:27 dockerenv-588031 containerd[777]: time="2023-07-10T23:56:27.031804595Z" level=info msg="StartContainer for \"7dffca574c701cd01bfb8362e46d22c2d229997c48eeb6c4672e1b8a64196eda\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               dockerenv-588031
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=dockerenv-588031
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=330d5dd0bc9f186aa250423fa2021af06dc8c810
	                    minikube.k8s.io/name=dockerenv-588031
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_10T23_56_14_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jul 2023 23:56:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  dockerenv-588031
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jul 2023 23:56:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jul 2023 23:56:13 +0000   Mon, 10 Jul 2023 23:56:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jul 2023 23:56:13 +0000   Mon, 10 Jul 2023 23:56:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jul 2023 23:56:13 +0000   Mon, 10 Jul 2023 23:56:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jul 2023 23:56:13 +0000   Mon, 10 Jul 2023 23:56:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    dockerenv-588031
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 246c1df9ec7d4c4d97c6ab2d5c2c6f9a
	  System UUID:                6f4ff24b-f800-471d-a2de-8df14ffe686e
	  Boot ID:                    0e95073f-a93e-4fda-a556-1e020f5d02d3
	  Kernel Version:             5.15.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.21
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-95wks                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     1s
	  kube-system                 etcd-dockerenv-588031                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         14s
	  kube-system                 kindnet-n99q7                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-dockerenv-588031             250m (3%)     0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 kube-controller-manager-dockerenv-588031    200m (2%)     0 (0%)      0 (0%)           0 (0%)         15s
	  kube-system                 kube-proxy-z7ffl                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-dockerenv-588031             100m (1%)     0 (0%)      0 (0%)           0 (0%)         15s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 0s    kube-proxy       
	  Normal  Starting                 14s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14s   kubelet          Node dockerenv-588031 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14s   kubelet          Node dockerenv-588031 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14s   kubelet          Node dockerenv-588031 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             14s   kubelet          Node dockerenv-588031 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  14s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                14s   kubelet          Node dockerenv-588031 status is now: NodeReady
	  Normal  RegisteredNode           2s    node-controller  Node dockerenv-588031 event: Registered Node dockerenv-588031 in Controller
	
	* 
	* ==> dmesg <==
	* [Jul10 23:17]  #2
	[  +0.001122]  #3
	[  +0.000937]  #4
	[  +0.003156] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001888] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001305] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.004123]  #5
	[  +0.000714]  #6
	[  +0.000845]  #7
	[  +0.058325] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.547555] i8042: Warning: Keylock active
	[  +0.007364] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003248] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000736] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000630] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000721] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000618] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000663] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000661] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000665] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000731] platform eisa.0: Cannot allocate resource for EISA slot 8
	[ +10.103262] kauditd_printk_skb: 34 callbacks suppressed
	
	* 
	* ==> etcd [2ca7f35b2077078e79782fbc2ca4e229277261fb53b8d604f7bced50559b1c9d] <==
	* {"level":"info","ts":"2023-07-10T23:56:08.551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-07-10T23:56:08.551Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-07-10T23:56:08.553Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-10T23:56:08.553Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-07-10T23:56:08.553Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-07-10T23:56:08.553Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-10T23:56:08.553Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-10T23:56:09.441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-07-10T23:56:09.442Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-07-10T23:56:09.442Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-07-10T23:56:09.442Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-07-10T23:56:09.442Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-07-10T23:56:09.442Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-07-10T23:56:09.442Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-07-10T23:56:09.442Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-10T23:56:09.443Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:dockerenv-588031 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-10T23:56:09.443Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-10T23:56:09.443Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-10T23:56:09.443Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-10T23:56:09.443Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-10T23:56:09.443Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-10T23:56:09.444Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-10T23:56:09.444Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-10T23:56:09.445Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-10T23:56:09.445Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	
	* 
	* ==> kernel <==
	*  23:56:27 up 38 min,  0 users,  load average: 1.47, 0.91, 0.40
	Linux dockerenv-588031 5.15.0-1037-gcp #45~20.04.1-Ubuntu SMP Thu Jun 22 08:31:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [7dffca574c701cd01bfb8362e46d22c2d229997c48eeb6c4672e1b8a64196eda] <==
	* I0710 23:56:27.121562       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0710 23:56:27.121626       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0710 23:56:27.121765       1 main.go:116] setting mtu 1500 for CNI 
	I0710 23:56:27.121791       1 main.go:146] kindnetd IP family: "ipv4"
	I0710 23:56:27.121819       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	
	* 
	* ==> kube-apiserver [840b2276764bc874380a3f7912ee6443611d2abe354da5925aa1e357624de3e6] <==
	* I0710 23:56:10.636097       1 aggregator.go:152] initial CRD sync complete...
	I0710 23:56:10.636104       1 autoregister_controller.go:141] Starting autoregister controller
	I0710 23:56:10.636110       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0710 23:56:10.636117       1 cache.go:39] Caches are synced for autoregister controller
	I0710 23:56:10.636244       1 controller.go:624] quota admission added evaluator for: namespaces
	I0710 23:56:10.644183       1 shared_informer.go:318] Caches are synced for node_authorizer
	E0710 23:56:10.720344       1 controller.go:150] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	E0710 23:56:10.721983       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0710 23:56:10.925552       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0710 23:56:11.303119       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0710 23:56:11.541411       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0710 23:56:11.544938       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0710 23:56:11.544957       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0710 23:56:11.908033       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0710 23:56:11.944136       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0710 23:56:12.039144       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0710 23:56:12.046836       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0710 23:56:12.047796       1 controller.go:624] quota admission added evaluator for: endpoints
	I0710 23:56:12.051512       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0710 23:56:12.570292       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0710 23:56:13.471916       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0710 23:56:13.481191       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0710 23:56:13.489948       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0710 23:56:25.977816       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0710 23:56:26.278016       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [8ca62eae3e9cf709058e321f32d1e47b9021515cf9e3bb196242e82f34915098] <==
	* I0710 23:56:25.414754       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0710 23:56:25.420046       1 shared_informer.go:318] Caches are synced for ephemeral
	I0710 23:56:25.420069       1 shared_informer.go:318] Caches are synced for HPA
	I0710 23:56:25.420080       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0710 23:56:25.421220       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0710 23:56:25.422409       1 shared_informer.go:318] Caches are synced for stateful set
	I0710 23:56:25.424710       1 shared_informer.go:318] Caches are synced for endpoint
	I0710 23:56:25.424732       1 shared_informer.go:318] Caches are synced for TTL
	I0710 23:56:25.426914       1 shared_informer.go:318] Caches are synced for deployment
	I0710 23:56:25.428946       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0710 23:56:25.434892       1 shared_informer.go:318] Caches are synced for namespace
	I0710 23:56:25.435310       1 shared_informer.go:318] Caches are synced for persistent volume
	I0710 23:56:25.442866       1 shared_informer.go:318] Caches are synced for disruption
	I0710 23:56:25.594278       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0710 23:56:25.621505       1 shared_informer.go:318] Caches are synced for job
	I0710 23:56:25.623720       1 shared_informer.go:318] Caches are synced for cronjob
	I0710 23:56:25.626300       1 shared_informer.go:318] Caches are synced for resource quota
	I0710 23:56:25.629634       1 shared_informer.go:318] Caches are synced for resource quota
	I0710 23:56:25.946260       1 shared_informer.go:318] Caches are synced for garbage collector
	I0710 23:56:25.968825       1 shared_informer.go:318] Caches are synced for garbage collector
	I0710 23:56:25.968856       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0710 23:56:25.986618       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-z7ffl"
	I0710 23:56:25.988129       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-n99q7"
	I0710 23:56:26.281868       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5d78c9869d to 1"
	I0710 23:56:26.433949       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-95wks"
	
	* 
	* ==> kube-proxy [602f053dab2cb6b4200bb1b113cf41822fcdcfe50465d65242d729cf3f2a2fd5] <==
	* I0710 23:56:26.819503       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0710 23:56:26.819579       1 server_others.go:110] "Detected node IP" address="192.168.49.2"
	I0710 23:56:26.819605       1 server_others.go:554] "Using iptables proxy"
	I0710 23:56:26.847978       1 server_others.go:192] "Using iptables Proxier"
	I0710 23:56:26.848044       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0710 23:56:26.848056       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0710 23:56:26.848074       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0710 23:56:26.848132       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0710 23:56:26.848886       1 server.go:658] "Version info" version="v1.27.3"
	I0710 23:56:26.848911       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0710 23:56:26.849588       1 config.go:188] "Starting service config controller"
	I0710 23:56:26.849683       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0710 23:56:26.849861       1 config.go:315] "Starting node config controller"
	I0710 23:56:26.849876       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0710 23:56:26.850941       1 config.go:97] "Starting endpoint slice config controller"
	I0710 23:56:26.850973       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0710 23:56:26.950763       1 shared_informer.go:318] Caches are synced for service config
	I0710 23:56:26.950814       1 shared_informer.go:318] Caches are synced for node config
	I0710 23:56:26.951197       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [3ceec7c326e50910cc4a5567d670005c507fb5bfbb1b376f0dd73912fd418788] <==
	* W0710 23:56:10.642489       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0710 23:56:10.642497       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0710 23:56:10.642547       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0710 23:56:10.642555       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0710 23:56:10.642594       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0710 23:56:10.642601       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0710 23:56:10.642624       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0710 23:56:10.642630       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0710 23:56:10.642659       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0710 23:56:10.642666       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0710 23:56:10.642694       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0710 23:56:10.642700       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0710 23:56:10.642727       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0710 23:56:10.642735       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0710 23:56:10.642796       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0710 23:56:10.642807       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0710 23:56:10.642839       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0710 23:56:10.642846       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0710 23:56:11.551794       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0710 23:56:11.551822       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0710 23:56:11.552849       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0710 23:56:11.552875       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0710 23:56:11.723049       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0710 23:56:11.723089       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0710 23:56:12.038674       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Jul 10 23:56:14 dockerenv-588031 kubelet[1500]: I0710 23:56:14.649006    1500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-dockerenv-588031" podStartSLOduration=2.648958691 podCreationTimestamp="2023-07-10 23:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-10 23:56:14.637206163 +0000 UTC m=+1.187911386" watchObservedRunningTime="2023-07-10 23:56:14.648958691 +0000 UTC m=+1.199663914"
	Jul 10 23:56:14 dockerenv-588031 kubelet[1500]: I0710 23:56:14.658073    1500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-dockerenv-588031" podStartSLOduration=1.6580049639999999 podCreationTimestamp="2023-07-10 23:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-10 23:56:14.649339752 +0000 UTC m=+1.200044975" watchObservedRunningTime="2023-07-10 23:56:14.658004964 +0000 UTC m=+1.208710188"
	Jul 10 23:56:25 dockerenv-588031 kubelet[1500]: I0710 23:56:25.571329    1500 topology_manager.go:212] "Topology Admit Handler"
	Jul 10 23:56:25 dockerenv-588031 kubelet[1500]: I0710 23:56:25.598269    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e913fe82-6cb6-4e1c-8e25-acef3e0e0697-tmp\") pod \"storage-provisioner\" (UID: \"e913fe82-6cb6-4e1c-8e25-acef3e0e0697\") " pod="kube-system/storage-provisioner"
	Jul 10 23:56:25 dockerenv-588031 kubelet[1500]: I0710 23:56:25.598328    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nxzb\" (UniqueName: \"kubernetes.io/projected/e913fe82-6cb6-4e1c-8e25-acef3e0e0697-kube-api-access-7nxzb\") pod \"storage-provisioner\" (UID: \"e913fe82-6cb6-4e1c-8e25-acef3e0e0697\") " pod="kube-system/storage-provisioner"
	Jul 10 23:56:25 dockerenv-588031 kubelet[1500]: E0710 23:56:25.704367    1500 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 10 23:56:25 dockerenv-588031 kubelet[1500]: E0710 23:56:25.704401    1500 projected.go:198] Error preparing data for projected volume kube-api-access-7nxzb for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jul 10 23:56:25 dockerenv-588031 kubelet[1500]: E0710 23:56:25.704473    1500 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e913fe82-6cb6-4e1c-8e25-acef3e0e0697-kube-api-access-7nxzb podName:e913fe82-6cb6-4e1c-8e25-acef3e0e0697 nodeName:}" failed. No retries permitted until 2023-07-10 23:56:26.204453083 +0000 UTC m=+12.755158296 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7nxzb" (UniqueName: "kubernetes.io/projected/e913fe82-6cb6-4e1c-8e25-acef3e0e0697-kube-api-access-7nxzb") pod "storage-provisioner" (UID: "e913fe82-6cb6-4e1c-8e25-acef3e0e0697") : configmap "kube-root-ca.crt" not found
	Jul 10 23:56:25 dockerenv-588031 kubelet[1500]: I0710 23:56:25.992613    1500 topology_manager.go:212] "Topology Admit Handler"
	Jul 10 23:56:25 dockerenv-588031 kubelet[1500]: I0710 23:56:25.995253    1500 topology_manager.go:212] "Topology Admit Handler"
	Jul 10 23:56:26 dockerenv-588031 kubelet[1500]: I0710 23:56:26.100876    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ed23f511-eec5-409c-a44a-b7e8847f1935-kube-proxy\") pod \"kube-proxy-z7ffl\" (UID: \"ed23f511-eec5-409c-a44a-b7e8847f1935\") " pod="kube-system/kube-proxy-z7ffl"
	Jul 10 23:56:26 dockerenv-588031 kubelet[1500]: I0710 23:56:26.100924    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed23f511-eec5-409c-a44a-b7e8847f1935-lib-modules\") pod \"kube-proxy-z7ffl\" (UID: \"ed23f511-eec5-409c-a44a-b7e8847f1935\") " pod="kube-system/kube-proxy-z7ffl"
	Jul 10 23:56:26 dockerenv-588031 kubelet[1500]: I0710 23:56:26.100953    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qncqn\" (UniqueName: \"kubernetes.io/projected/ed23f511-eec5-409c-a44a-b7e8847f1935-kube-api-access-qncqn\") pod \"kube-proxy-z7ffl\" (UID: \"ed23f511-eec5-409c-a44a-b7e8847f1935\") " pod="kube-system/kube-proxy-z7ffl"
	Jul 10 23:56:26 dockerenv-588031 kubelet[1500]: I0710 23:56:26.101043    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7qh5\" (UniqueName: \"kubernetes.io/projected/05bcb275-cf98-4d9b-9690-660fc711c855-kube-api-access-b7qh5\") pod \"kindnet-n99q7\" (UID: \"05bcb275-cf98-4d9b-9690-660fc711c855\") " pod="kube-system/kindnet-n99q7"
	Jul 10 23:56:26 dockerenv-588031 kubelet[1500]: I0710 23:56:26.101080    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed23f511-eec5-409c-a44a-b7e8847f1935-xtables-lock\") pod \"kube-proxy-z7ffl\" (UID: \"ed23f511-eec5-409c-a44a-b7e8847f1935\") " pod="kube-system/kube-proxy-z7ffl"
	Jul 10 23:56:26 dockerenv-588031 kubelet[1500]: I0710 23:56:26.101112    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05bcb275-cf98-4d9b-9690-660fc711c855-xtables-lock\") pod \"kindnet-n99q7\" (UID: \"05bcb275-cf98-4d9b-9690-660fc711c855\") " pod="kube-system/kindnet-n99q7"
	Jul 10 23:56:26 dockerenv-588031 kubelet[1500]: I0710 23:56:26.101220    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/05bcb275-cf98-4d9b-9690-660fc711c855-cni-cfg\") pod \"kindnet-n99q7\" (UID: \"05bcb275-cf98-4d9b-9690-660fc711c855\") " pod="kube-system/kindnet-n99q7"
	Jul 10 23:56:26 dockerenv-588031 kubelet[1500]: I0710 23:56:26.101263    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05bcb275-cf98-4d9b-9690-660fc711c855-lib-modules\") pod \"kindnet-n99q7\" (UID: \"05bcb275-cf98-4d9b-9690-660fc711c855\") " pod="kube-system/kindnet-n99q7"
	Jul 10 23:56:26 dockerenv-588031 kubelet[1500]: I0710 23:56:26.439252    1500 topology_manager.go:212] "Topology Admit Handler"
	Jul 10 23:56:26 dockerenv-588031 kubelet[1500]: I0710 23:56:26.616359    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49nnw\" (UniqueName: \"kubernetes.io/projected/ee7083f4-21be-4591-9a9b-2bc2bceb3262-kube-api-access-49nnw\") pod \"coredns-5d78c9869d-95wks\" (UID: \"ee7083f4-21be-4591-9a9b-2bc2bceb3262\") " pod="kube-system/coredns-5d78c9869d-95wks"
	Jul 10 23:56:26 dockerenv-588031 kubelet[1500]: I0710 23:56:26.616432    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee7083f4-21be-4591-9a9b-2bc2bceb3262-config-volume\") pod \"coredns-5d78c9869d-95wks\" (UID: \"ee7083f4-21be-4591-9a9b-2bc2bceb3262\") " pod="kube-system/coredns-5d78c9869d-95wks"
	Jul 10 23:56:26 dockerenv-588031 kubelet[1500]: E0710 23:56:26.839362    1500 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"966461c5a72cde7f6e733986caa922b9749efb3c6110e9baed938169d783559a\": failed to find network info for sandbox \"966461c5a72cde7f6e733986caa922b9749efb3c6110e9baed938169d783559a\""
	Jul 10 23:56:26 dockerenv-588031 kubelet[1500]: E0710 23:56:26.839443    1500 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"966461c5a72cde7f6e733986caa922b9749efb3c6110e9baed938169d783559a\": failed to find network info for sandbox \"966461c5a72cde7f6e733986caa922b9749efb3c6110e9baed938169d783559a\"" pod="kube-system/coredns-5d78c9869d-95wks"
	Jul 10 23:56:26 dockerenv-588031 kubelet[1500]: E0710 23:56:26.839473    1500 kuberuntime_manager.go:1122] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"966461c5a72cde7f6e733986caa922b9749efb3c6110e9baed938169d783559a\": failed to find network info for sandbox \"966461c5a72cde7f6e733986caa922b9749efb3c6110e9baed938169d783559a\"" pod="kube-system/coredns-5d78c9869d-95wks"
	Jul 10 23:56:26 dockerenv-588031 kubelet[1500]: E0710 23:56:26.839545    1500 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5d78c9869d-95wks_kube-system(ee7083f4-21be-4591-9a9b-2bc2bceb3262)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5d78c9869d-95wks_kube-system(ee7083f4-21be-4591-9a9b-2bc2bceb3262)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"966461c5a72cde7f6e733986caa922b9749efb3c6110e9baed938169d783559a\\\": failed to find network info for sandbox \\\"966461c5a72cde7f6e733986caa922b9749efb3c6110e9baed938169d783559a\\\"\"" pod="kube-system/coredns-5d78c9869d-95wks" podUID=ee7083f4-21be-4591-9a9b-2bc2bceb3262
	
	* 
	* ==> storage-provisioner [abb8465e4b08600c1b31faf82d206b7213558b74c3d0ab6245492018c0e7362b] <==
	* I0710 23:56:26.917097       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
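Note: the coredns-5d78c9869d-95wks sandbox failure at 23:56:26 ("failed to find network info for sandbox", visible in both the containerd and kubelet sections above) is most plausibly a startup-ordering race rather than a separate fault: the kindnet-cni container had been created less than a second earlier, so no CNI network config was available when the first sandbox attempt ran, and the pod was still pending when the profile was torn down (see the non-running-pods check below).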
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p dockerenv-588031 -n dockerenv-588031
helpers_test.go:261: (dbg) Run:  kubectl --context dockerenv-588031 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: coredns-5d78c9869d-95wks
helpers_test.go:274: ======> post-mortem[TestDockerEnvContainerd]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context dockerenv-588031 describe pod coredns-5d78c9869d-95wks
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context dockerenv-588031 describe pod coredns-5d78c9869d-95wks: exit status 1 (58.244304ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-5d78c9869d-95wks" not found

** /stderr **
helpers_test.go:279: kubectl --context dockerenv-588031 describe pod coredns-5d78c9869d-95wks: exit status 1
helpers_test.go:175: Cleaning up "dockerenv-588031" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-588031
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-588031: (1.83059376s)
--- FAIL: TestDockerEnvContainerd (34.87s)
panic: runtime error: index out of range [1] with length 0 [recovered]
	panic: runtime error: index out of range [1] with length 0

goroutine 535 [running]:
testing.tRunner.func1.2({0x245e520, 0xc000f8a0d8})
	/usr/local/go/src/testing/testing.go:1526 +0x24e
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1529 +0x39f
panic({0x245e520, 0xc000f8a0d8})
	/usr/local/go/src/runtime/panic.go:884 +0x213
k8s.io/minikube/test/integration.TestDockerEnvContainerd(0xc000582d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:199 +0x12d2
testing.tRunner(0xc000582d00, 0x29a2a18)
	/usr/local/go/src/testing/testing.go:1576 +0x10b
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1629 +0x3ea
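The secondary panic above ("index out of range [1] with length 0" at docker_test.go:199) is consistent with the test indexing into fields parsed from the empty stdout of the failed docker-env step. A minimal sketch of that failure mode follows; the variable names are illustrative and not taken from the test source:

	package main
	
	import (
		"fmt"
		"strings"
	)
	
	func main() {
		stdout := "" // the earlier command exited non-zero, so its stdout is empty
		fields := strings.Fields(stdout) // zero-length slice for empty input
		fmt.Println(len(fields))         // 0
		// Accessing element 1 of a zero-length slice panics with exactly:
		// "index out of range [1] with length 0"
		_ = fields[1]
	}

Guarding the index with a length check, or failing the test before parsing, would let the run end at the FAIL line instead of aborting the test binary with a stack trace.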

Test pass (21/30)

TestDownloadOnly/v1.16.0/json-events (16.3s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-931368 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-931368 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (16.295879594s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (16.30s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-931368
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-931368: exit status 85 (54.642545ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-931368 | jenkins | v1.30.1 | 10 Jul 23 23:51 UTC |          |
	|         | -p download-only-931368        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/10 23:51:30
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0710 23:51:30.356599   14973 out.go:296] Setting OutFile to fd 1 ...
	I0710 23:51:30.356759   14973 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0710 23:51:30.356767   14973 out.go:309] Setting ErrFile to fd 2...
	I0710 23:51:30.356772   14973 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0710 23:51:30.357181   14973 root.go:337] Updating PATH: /home/jenkins/minikube-integration/15452-8191/.minikube/bin
	W0710 23:51:30.357507   14973 root.go:313] Error reading config file at /home/jenkins/minikube-integration/15452-8191/.minikube/config/config.json: open /home/jenkins/minikube-integration/15452-8191/.minikube/config/config.json: no such file or directory
	I0710 23:51:30.358305   14973 out.go:303] Setting JSON to true
	I0710 23:51:30.359112   14973 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2038,"bootTime":1689031052,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0710 23:51:30.359185   14973 start.go:137] virtualization: kvm guest
	I0710 23:51:30.361942   14973 out.go:97] [download-only-931368] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0710 23:51:30.363649   14973 out.go:169] MINIKUBE_LOCATION=15452
	W0710 23:51:30.362043   14973 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/15452-8191/.minikube/cache/preloaded-tarball: no such file or directory
	I0710 23:51:30.362075   14973 notify.go:220] Checking for updates...
	I0710 23:51:30.366513   14973 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0710 23:51:30.367988   14973 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15452-8191/kubeconfig
	I0710 23:51:30.369364   14973 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15452-8191/.minikube
	I0710 23:51:30.370581   14973 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0710 23:51:30.373011   14973 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0710 23:51:30.373215   14973 driver.go:373] Setting default libvirt URI to qemu:///system
	I0710 23:51:30.394424   14973 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0710 23:51:30.394497   14973 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0710 23:51:30.746153   14973 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2023-07-10 23:51:30.73845835 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0710 23:51:30.746251   14973 docker.go:294] overlay module found
	I0710 23:51:30.747998   14973 out.go:97] Using the docker driver based on user configuration
	I0710 23:51:30.748018   14973 start.go:297] selected driver: docker
	I0710 23:51:30.748023   14973 start.go:944] validating driver "docker" against <nil>
	I0710 23:51:30.748112   14973 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0710 23:51:30.801499   14973 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2023-07-10 23:51:30.793309984 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0710 23:51:30.801670   14973 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0710 23:51:30.802156   14973 start_flags.go:382] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0710 23:51:30.802304   14973 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0710 23:51:30.804184   14973 out.go:169] Using Docker driver with root privileges
	I0710 23:51:30.805519   14973 cni.go:84] Creating CNI manager for ""
	I0710 23:51:30.805539   14973 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0710 23:51:30.805547   14973 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0710 23:51:30.805570   14973 start_flags.go:319] config:
	{Name:download-only-931368 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-931368 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0710 23:51:30.807044   14973 out.go:97] Starting control plane node download-only-931368 in cluster download-only-931368
	I0710 23:51:30.807063   14973 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0710 23:51:30.808394   14973 out.go:97] Pulling base image ...
	I0710 23:51:30.808419   14973 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0710 23:51:30.808518   14973 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5 in local docker daemon
	I0710 23:51:30.823372   14973 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5 to local cache
	I0710 23:51:30.823530   14973 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5 in local cache directory
	I0710 23:51:30.823610   14973 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5 to local cache
	I0710 23:51:30.919127   14973 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0710 23:51:30.919213   14973 cache.go:57] Caching tarball of preloaded images
	I0710 23:51:30.919401   14973 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0710 23:51:30.921414   14973 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0710 23:51:30.921438   14973 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0710 23:51:31.035245   14973 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/15452-8191/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0710 23:51:43.215749   14973 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5 as a tarball
	I0710 23:51:43.215857   14973 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0710 23:51:43.215920   14973 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15452-8191/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0710 23:51:44.076059   14973 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0710 23:51:44.076360   14973 profile.go:148] Saving config to /home/jenkins/minikube-integration/15452-8191/.minikube/profiles/download-only-931368/config.json ...
	I0710 23:51:44.076389   14973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15452-8191/.minikube/profiles/download-only-931368/config.json: {Name:mk8cf6fe1d0cc8777899adde79415cf3c545e2d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0710 23:51:44.076569   14973 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0710 23:51:44.076757   14973 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/15452-8191/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-931368"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.06s)
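
Note on the preload steps logged above: download.go fetches the tarball with a ?checksum=md5:... query parameter, and preload.go then saves and verifies the checksum of the file on disk. The Go sketch below is a minimal illustration of that download-then-verify flow (not minikube's actual implementation): it hashes the stream while writing it to disk and compares against the expected md5. The URL and md5 value are taken from the log line above; the destination path is hypothetical.

	package main
	
	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)
	
	// downloadAndVerify fetches url into dest and checks the file's md5 digest.
	func downloadAndVerify(url, dest, wantMD5 string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
	
		out, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer out.Close()
	
		// Hash the stream while writing it to disk.
		h := md5.New()
		if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
		}
		return nil
	}
	
	func main() {
		// URL and md5 from the download.go line above; the /tmp path is illustrative.
		err := downloadAndVerify(
			"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4",
			"/tmp/preload.tar.lz4",
			"d96a2b2afa188e17db7ddabb58d563fd",
		)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}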

TestDownloadOnly/v1.27.3/json-events (16.93s)

=== RUN   TestDownloadOnly/v1.27.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-931368 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-931368 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (16.930671056s)
--- PASS: TestDownloadOnly/v1.27.3/json-events (16.93s)

TestDownloadOnly/v1.27.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.27.3/preload-exists
--- PASS: TestDownloadOnly/v1.27.3/preload-exists (0.00s)

TestDownloadOnly/v1.27.3/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.27.3/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-931368
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-931368: exit status 85 (53.67585ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-931368 | jenkins | v1.30.1 | 10 Jul 23 23:51 UTC |          |
	|         | -p download-only-931368        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-931368 | jenkins | v1.30.1 | 10 Jul 23 23:51 UTC |          |
	|         | -p download-only-931368        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/10 23:51:46
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0710 23:51:46.710431   15139 out.go:296] Setting OutFile to fd 1 ...
	I0710 23:51:46.710584   15139 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0710 23:51:46.710596   15139 out.go:309] Setting ErrFile to fd 2...
	I0710 23:51:46.710603   15139 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0710 23:51:46.711071   15139 root.go:337] Updating PATH: /home/jenkins/minikube-integration/15452-8191/.minikube/bin
	W0710 23:51:46.711391   15139 root.go:313] Error reading config file at /home/jenkins/minikube-integration/15452-8191/.minikube/config/config.json: open /home/jenkins/minikube-integration/15452-8191/.minikube/config/config.json: no such file or directory
	I0710 23:51:46.712125   15139 out.go:303] Setting JSON to true
	I0710 23:51:46.712965   15139 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2055,"bootTime":1689031052,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0710 23:51:46.713050   15139 start.go:137] virtualization: kvm guest
	I0710 23:51:46.715024   15139 out.go:97] [download-only-931368] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0710 23:51:46.716564   15139 out.go:169] MINIKUBE_LOCATION=15452
	I0710 23:51:46.715150   15139 notify.go:220] Checking for updates...
	I0710 23:51:46.719291   15139 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0710 23:51:46.720668   15139 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15452-8191/kubeconfig
	I0710 23:51:46.722214   15139 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15452-8191/.minikube
	I0710 23:51:46.723598   15139 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0710 23:51:46.726346   15139 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0710 23:51:46.726706   15139 config.go:182] Loaded profile config "download-only-931368": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W0710 23:51:46.726742   15139 start.go:852] api.Load failed for download-only-931368: filestore "download-only-931368": Docker machine "download-only-931368" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0710 23:51:46.726832   15139 driver.go:373] Setting default libvirt URI to qemu:///system
	W0710 23:51:46.726861   15139 start.go:852] api.Load failed for download-only-931368: filestore "download-only-931368": Docker machine "download-only-931368" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0710 23:51:46.746791   15139 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0710 23:51:46.746897   15139 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0710 23:51:46.795214   15139 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-07-10 23:51:46.787481733 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0710 23:51:46.795307   15139 docker.go:294] overlay module found
	I0710 23:51:46.797317   15139 out.go:97] Using the docker driver based on existing profile
	I0710 23:51:46.797344   15139 start.go:297] selected driver: docker
	I0710 23:51:46.797348   15139 start.go:944] validating driver "docker" against &{Name:download-only-931368 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-931368 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0710 23:51:46.797477   15139 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0710 23:51:46.845911   15139 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-07-10 23:51:46.838190148 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0710 23:51:46.846470   15139 cni.go:84] Creating CNI manager for ""
	I0710 23:51:46.846488   15139 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0710 23:51:46.846497   15139 start_flags.go:319] config:
	{Name:download-only-931368 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:download-only-931368 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0710 23:51:46.848638   15139 out.go:97] Starting control plane node download-only-931368 in cluster download-only-931368
	I0710 23:51:46.848656   15139 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0710 23:51:46.850097   15139 out.go:97] Pulling base image ...
	I0710 23:51:46.850114   15139 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0710 23:51:46.850215   15139 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5 in local docker daemon
	I0710 23:51:46.864703   15139 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5 to local cache
	I0710 23:51:46.864830   15139 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5 in local cache directory
	I0710 23:51:46.864845   15139 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5 in local cache directory, skipping pull
	I0710 23:51:46.864852   15139 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5 exists in cache, skipping pull
	I0710 23:51:46.864870   15139 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5 as a tarball
	I0710 23:51:47.269861   15139 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4
	I0710 23:51:47.269890   15139 cache.go:57] Caching tarball of preloaded images
	I0710 23:51:47.270025   15139 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0710 23:51:47.271932   15139 out.go:97] Downloading Kubernetes v1.27.3 preload ...
	I0710 23:51:47.271945   15139 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4 ...
	I0710 23:51:47.384764   15139 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4?checksum=md5:1f83873e0026e1a370942079b65e1960 -> /home/jenkins/minikube-integration/15452-8191/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4
	I0710 23:52:00.095891   15139 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4 ...
	I0710 23:52:00.095973   15139 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15452-8191/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4 ...
	I0710 23:52:00.945080   15139 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on containerd
	I0710 23:52:00.945202   15139 profile.go:148] Saving config to /home/jenkins/minikube-integration/15452-8191/.minikube/profiles/download-only-931368/config.json ...
	I0710 23:52:00.945407   15139 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0710 23:52:00.945622   15139 download.go:107] Downloading: https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/15452-8191/.minikube/cache/linux/amd64/v1.27.3/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-931368"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.3/LogsDuration (0.05s)

TestDownloadOnly/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.19s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-931368
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnlyKic (1.15s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-631311 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-631311" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-631311
--- PASS: TestDownloadOnlyKic (1.15s)

TestBinaryMirror (2.26s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-045500 --alsologtostderr --binary-mirror http://127.0.0.1:46211 --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:304: (dbg) Done: out/minikube-linux-amd64 start --download-only -p binary-mirror-045500 --alsologtostderr --binary-mirror http://127.0.0.1:46211 --driver=docker  --container-runtime=containerd: (1.846336254s)
helpers_test.go:175: Cleaning up "binary-mirror-045500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-045500
--- PASS: TestBinaryMirror (2.26s)
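
For context, the --binary-mirror flag above points minikube's Kubernetes binary downloads at http://127.0.0.1:46211 instead of the default dl.k8s.io. A mirror of that shape can be as simple as a static file server over pre-fetched release binaries; the sketch below is a hedged illustration under that assumption (the ./mirror directory layout is invented for the example), not the test's actual server.

	package main
	
	import (
		"log"
		"net/http"
	)
	
	func main() {
		// Serve e.g. ./mirror/v1.27.3/bin/linux/amd64/kubectl so that requests
		// shaped like the dl.k8s.io release paths resolve against local files.
		fs := http.FileServer(http.Dir("./mirror"))
		log.Fatal(http.ListenAndServe("127.0.0.1:46211", fs))
	}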

TestAddons/Setup (119.99s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-435442 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-435442 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (1m59.994362048s)
--- PASS: TestAddons/Setup (119.99s)

TestAddons/parallel/Registry (16.85s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 13.083718ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-7x6kd" [a6cf4141-f53c-48c5-a4ca-f2c46d05fb90] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008926801s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-twkhk" [5e2e88d9-ae4c-4679-a7fc-0f00319c4360] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007839191s
addons_test.go:316: (dbg) Run:  kubectl --context addons-435442 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-435442 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-435442 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.259767869s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-amd64 -p addons-435442 ip
2023/07/10 23:54:23 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p addons-435442 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.85s)
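
The wget --spider step above is a reachability probe run from a throwaway busybox pod: it requests the registry's in-cluster service DNS name and discards the body, caring only that the request succeeds. A rough Go equivalent is sketched below (using a HEAD request in place of wget's spider mode); the service URL resolves only inside the cluster, so this is assumed to run in a pod.

	package main
	
	import (
		"fmt"
		"net/http"
	)
	
	func main() {
		// Only the status line matters; no body is read, as with --spider.
		resp, err := http.Head("http://registry.kube-system.svc.cluster.local")
		if err != nil {
			fmt.Println("registry unreachable:", err)
			return
		}
		resp.Body.Close()
		fmt.Println("registry responded:", resp.Status)
	}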

TestAddons/parallel/Ingress (20.35s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-435442 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-435442 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-435442 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b83cfd7a-c7d8-4b0a-9a16-137bd16be680] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b83cfd7a-c7d8-4b0a-9a16-137bd16be680] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.009734855s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p addons-435442 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context addons-435442 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p addons-435442 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p addons-435442 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p addons-435442 addons disable ingress-dns --alsologtostderr -v=1: (1.163441937s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p addons-435442 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p addons-435442 addons disable ingress --alsologtostderr -v=1: (7.459054471s)
--- PASS: TestAddons/parallel/Ingress (20.35s)
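
The curl step above exercises name-based routing: the request is sent to the ingress controller on 127.0.0.1, but the Host: nginx.example.com header makes the controller route it to the nginx service behind the Ingress rule. A minimal Go version of the same probe is sketched below; note that in net/http the virtual-host header is set through the request's Host field rather than Header.Set.

	package main
	
	import (
		"fmt"
		"net/http"
	)
	
	func main() {
		req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
		if err != nil {
			panic(err)
		}
		req.Host = "nginx.example.com" // routed by the ingress controller
	
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		fmt.Println("ingress responded:", resp.Status)
	}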

TestAddons/parallel/InspektorGadget (47.93s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-8k98v" [552db29f-f5e8-4707-beb0-0aaee8a5e2e9] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.010174273s
addons_test.go:817: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-435442
addons_test.go:817: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-435442: (42.923680654s)
--- PASS: TestAddons/parallel/InspektorGadget (47.93s)

TestAddons/parallel/MetricsServer (6.41s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 11.776652ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-844d8db974-6gw46" [cd6fe0b2-2151-413e-ae7f-e861ec94c975] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.007839005s
addons_test.go:391: (dbg) Run:  kubectl --context addons-435442 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p addons-435442 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p addons-435442 addons disable metrics-server --alsologtostderr -v=1: (1.306825684s)
--- PASS: TestAddons/parallel/MetricsServer (6.41s)

TestAddons/parallel/HelmTiller (10.93s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 11.759992ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6847666dc-d9zbp" [3c2834ef-fe7d-4b37-9b4e-5d7863a7a412] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.008610505s
addons_test.go:449: (dbg) Run:  kubectl --context addons-435442 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-435442 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.346701762s)
addons_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p addons-435442 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.93s)

TestAddons/parallel/CSI (61.04s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 5.465748ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-435442 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435442 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435442 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435442 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435442 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435442 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435442 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435442 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435442 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435442 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435442 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435442 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435442 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435442 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435442 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435442 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435442 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435442 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-435442 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [1bb323a1-74ae-4772-94a1-3fcc2660519e] Pending
helpers_test.go:344: "task-pv-pod" [1bb323a1-74ae-4772-94a1-3fcc2660519e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [1bb323a1-74ae-4772-94a1-3fcc2660519e] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.005927134s
addons_test.go:560: (dbg) Run:  kubectl --context addons-435442 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-435442 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-435442 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-435442 delete pod task-pv-pod
addons_test.go:570: (dbg) Done: kubectl --context addons-435442 delete pod task-pv-pod: (1.854155673s)
addons_test.go:576: (dbg) Run:  kubectl --context addons-435442 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-435442 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435442 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435442 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435442 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435442 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435442 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435442 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435442 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435442 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435442 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435442 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435442 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435442 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435442 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435442 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-435442 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [5c65c431-cfb3-42b8-8458-0fe1353064f8] Pending
helpers_test.go:344: "task-pv-pod-restore" [5c65c431-cfb3-42b8-8458-0fe1353064f8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5c65c431-cfb3-42b8-8458-0fe1353064f8] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.006529979s
addons_test.go:602: (dbg) Run:  kubectl --context addons-435442 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-435442 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-435442 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-amd64 -p addons-435442 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-amd64 -p addons-435442 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.384388854s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-amd64 -p addons-435442 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (61.04s)
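
The long runs of identical helpers_test.go:394 lines above are a poll loop: the helper repeatedly reads the PVC's .status.phase through a JSONPath query until it reports Bound or the 6m0s wait expires. A minimal sketch of such a loop, shelling out to kubectl, is below; the context and resource names are taken from the log, while the 2-second interval is an assumption.

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)
	
	// waitForPVCBound polls the PVC phase until it is Bound or timeout passes.
	func waitForPVCBound(ctx, name, namespace string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", ctx,
				"get", "pvc", name,
				"-o", "jsonpath={.status.phase}",
				"-n", namespace).Output()
			if err == nil && strings.TrimSpace(string(out)) == "Bound" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pvc %s/%s not Bound within %s", namespace, name, timeout)
	}
	
	func main() {
		if err := waitForPVCBound("addons-435442", "hpvc", "default", 6*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("pvc Bound")
	}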

TestAddons/parallel/Headlamp (11.72s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-435442 --alsologtostderr -v=1
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-66f6498c69-fnbhp" [334cb0b0-3550-4dfa-8727-ee4ee626e6e4] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-66f6498c69-fnbhp" [334cb0b0-3550-4dfa-8727-ee4ee626e6e4] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.006705813s
--- PASS: TestAddons/parallel/Headlamp (11.72s)

TestAddons/parallel/CloudSpanner (5.34s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-88647b4cb-nczp2" [b4ae160b-42cf-4779-b5b4-a2f4d66ec3e9] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006686873s
addons_test.go:836: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-435442
--- PASS: TestAddons/parallel/CloudSpanner (5.34s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-435442 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-435442 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/StoppedEnableDisable (12.08s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-435442
addons_test.go:148: (dbg) Done: out/minikube-linux-amd64 stop -p addons-435442: (11.910648626s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-435442
addons_test.go:156: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-435442
addons_test.go:161: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-435442
--- PASS: TestAddons/StoppedEnableDisable (12.08s)


Test skip (8/30)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.27.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.27.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.3/cached-images (0.00s)

TestDownloadOnly/v1.27.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.27.3/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.3/binaries (0.00s)

TestDownloadOnly/v1.27.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.27.3/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.27.3/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)
